Mar 08 21:55:32.179813 master-0 systemd[1]: Starting Kubernetes Kubelet...
Mar 08 21:55:32.849639 master-0 kubenswrapper[3962]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 08 21:55:32.849639 master-0 kubenswrapper[3962]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Mar 08 21:55:32.849639 master-0 kubenswrapper[3962]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 08 21:55:32.849639 master-0 kubenswrapper[3962]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 08 21:55:32.849639 master-0 kubenswrapper[3962]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 08 21:55:32.849639 master-0 kubenswrapper[3962]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 08 21:55:32.851585 master-0 kubenswrapper[3962]: I0308 21:55:32.850890 3962 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
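
[Editor's note] The deprecation notices above all point at the same remedy: move the flag into the file passed via --config (here /etc/kubernetes/kubelet.conf, per the FLAG dump further down). As a sketch only, not this cluster's actual kubelet.conf, the config-file-capable flags would map onto the upstream KubeletConfiguration v1beta1 fields roughly like this, with values copied from the FLAG dump later in this log:

# Sketch, assuming the upstream KubeletConfiguration v1beta1 schema;
# values are taken from the flags.go:64 FLAG dump below, not from the real file.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: /var/run/crio/crio.sock            # replaces --container-runtime-endpoint
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec  # replaces --volume-plugin-dir
registerWithTaints:                                           # replaces --register-with-taints
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
systemReserved:                                               # replaces --system-reserved
  cpu: "500m"
  ephemeral-storage: "1Gi"
  memory: "1Gi"
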
Mar 08 21:55:32.860930 master-0 kubenswrapper[3962]: W0308 21:55:32.860859 3962 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 08 21:55:32.860930 master-0 kubenswrapper[3962]: W0308 21:55:32.860896 3962 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 08 21:55:32.860930 master-0 kubenswrapper[3962]: W0308 21:55:32.860906 3962 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 08 21:55:32.860930 master-0 kubenswrapper[3962]: W0308 21:55:32.860916 3962 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 08 21:55:32.860930 master-0 kubenswrapper[3962]: W0308 21:55:32.860925 3962 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 08 21:55:32.860930 master-0 kubenswrapper[3962]: W0308 21:55:32.860936 3962 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 08 21:55:32.860930 master-0 kubenswrapper[3962]: W0308 21:55:32.860945 3962 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 08 21:55:32.861351 master-0 kubenswrapper[3962]: W0308 21:55:32.860955 3962 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 08 21:55:32.861351 master-0 kubenswrapper[3962]: W0308 21:55:32.860972 3962 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 08 21:55:32.861351 master-0 kubenswrapper[3962]: W0308 21:55:32.860980 3962 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 08 21:55:32.861351 master-0 kubenswrapper[3962]: W0308 21:55:32.860992 3962 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 08 21:55:32.861351 master-0 kubenswrapper[3962]: W0308 21:55:32.861003 3962 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 08 21:55:32.861351 master-0 kubenswrapper[3962]: W0308 21:55:32.861011 3962 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 08 21:55:32.861351 master-0 kubenswrapper[3962]: W0308 21:55:32.861019 3962 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 08 21:55:32.861351 master-0 kubenswrapper[3962]: W0308 21:55:32.861027 3962 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 08 21:55:32.861351 master-0 kubenswrapper[3962]: W0308 21:55:32.861035 3962 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 08 21:55:32.861351 master-0 kubenswrapper[3962]: W0308 21:55:32.861046 3962 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 08 21:55:32.861351 master-0 kubenswrapper[3962]: W0308 21:55:32.861057 3962 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 08 21:55:32.861351 master-0 kubenswrapper[3962]: W0308 21:55:32.861066 3962 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 08 21:55:32.861351 master-0 kubenswrapper[3962]: W0308 21:55:32.861080 3962 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 08 21:55:32.861351 master-0 kubenswrapper[3962]: W0308 21:55:32.861112 3962 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 08 21:55:32.861351 master-0 kubenswrapper[3962]: W0308 21:55:32.861120 3962 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 08 21:55:32.861351 master-0 kubenswrapper[3962]: W0308 21:55:32.861128 3962 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 08 21:55:32.861351 master-0 kubenswrapper[3962]: W0308 21:55:32.861136 3962 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 08 21:55:32.861351 master-0 kubenswrapper[3962]: W0308 21:55:32.861148 3962 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 08 21:55:32.861351 master-0 kubenswrapper[3962]: W0308 21:55:32.861158 3962 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 08 21:55:32.862211 master-0 kubenswrapper[3962]: W0308 21:55:32.861168 3962 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 08 21:55:32.862211 master-0 kubenswrapper[3962]: W0308 21:55:32.861177 3962 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 08 21:55:32.862211 master-0 kubenswrapper[3962]: W0308 21:55:32.861185 3962 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 08 21:55:32.862211 master-0 kubenswrapper[3962]: W0308 21:55:32.861192 3962 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 08 21:55:32.862211 master-0 kubenswrapper[3962]: W0308 21:55:32.861201 3962 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 08 21:55:32.862211 master-0 kubenswrapper[3962]: W0308 21:55:32.861210 3962 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 08 21:55:32.862211 master-0 kubenswrapper[3962]: W0308 21:55:32.861218 3962 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 08 21:55:32.862211 master-0 kubenswrapper[3962]: W0308 21:55:32.861226 3962 feature_gate.go:330] unrecognized feature gate: Example
Mar 08 21:55:32.862211 master-0 kubenswrapper[3962]: W0308 21:55:32.861233 3962 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 08 21:55:32.862211 master-0 kubenswrapper[3962]: W0308 21:55:32.861241 3962 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 08 21:55:32.862211 master-0 kubenswrapper[3962]: W0308 21:55:32.861249 3962 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 08 21:55:32.862211 master-0 kubenswrapper[3962]: W0308 21:55:32.861257 3962 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 08 21:55:32.862211 master-0 kubenswrapper[3962]: W0308 21:55:32.861265 3962 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 08 21:55:32.862211 master-0 kubenswrapper[3962]: W0308 21:55:32.861272 3962 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 08 21:55:32.862211 master-0 kubenswrapper[3962]: W0308 21:55:32.861284 3962 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 08 21:55:32.862211 master-0 kubenswrapper[3962]: W0308 21:55:32.861293 3962 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 08 21:55:32.862211 master-0 kubenswrapper[3962]: W0308 21:55:32.861301 3962 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 08 21:55:32.862211 master-0 kubenswrapper[3962]: W0308 21:55:32.861309 3962 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 08 21:55:32.862211 master-0 kubenswrapper[3962]: W0308 21:55:32.861316 3962 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 08 21:55:32.862211 master-0 kubenswrapper[3962]: W0308 21:55:32.861325 3962 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 08 21:55:32.863084 master-0 kubenswrapper[3962]: W0308 21:55:32.861333 3962 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 08 21:55:32.863084 master-0 kubenswrapper[3962]: W0308 21:55:32.861341 3962 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 08 21:55:32.863084 master-0 kubenswrapper[3962]: W0308 21:55:32.861349 3962 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 08 21:55:32.863084 master-0 kubenswrapper[3962]: W0308 21:55:32.861357 3962 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 08 21:55:32.863084 master-0 kubenswrapper[3962]: W0308 21:55:32.861366 3962 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 08 21:55:32.863084 master-0 kubenswrapper[3962]: W0308 21:55:32.861375 3962 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 08 21:55:32.863084 master-0 kubenswrapper[3962]: W0308 21:55:32.861383 3962 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 08 21:55:32.863084 master-0 kubenswrapper[3962]: W0308 21:55:32.861392 3962 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 08 21:55:32.863084 master-0 kubenswrapper[3962]: W0308 21:55:32.861401 3962 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 08 21:55:32.863084 master-0 kubenswrapper[3962]: W0308 21:55:32.861409 3962 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 08 21:55:32.863084 master-0 kubenswrapper[3962]: W0308 21:55:32.861417 3962 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 08 21:55:32.863084 master-0 kubenswrapper[3962]: W0308 21:55:32.861430 3962 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 08 21:55:32.863084 master-0 kubenswrapper[3962]: W0308 21:55:32.861440 3962 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 08 21:55:32.863084 master-0 kubenswrapper[3962]: W0308 21:55:32.861449 3962 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 08 21:55:32.863084 master-0 kubenswrapper[3962]: W0308 21:55:32.861458 3962 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 08 21:55:32.863084 master-0 kubenswrapper[3962]: W0308 21:55:32.861466 3962 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 08 21:55:32.863084 master-0 kubenswrapper[3962]: W0308 21:55:32.861474 3962 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 08 21:55:32.863084 master-0 kubenswrapper[3962]: W0308 21:55:32.861482 3962 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 08 21:55:32.863084 master-0 kubenswrapper[3962]: W0308 21:55:32.861489 3962 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 08 21:55:32.864118 master-0 kubenswrapper[3962]: W0308 21:55:32.861497 3962 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 08 21:55:32.864118 master-0 kubenswrapper[3962]: W0308 21:55:32.861505 3962 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 08 21:55:32.864118 master-0 kubenswrapper[3962]: W0308 21:55:32.861514 3962 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 08 21:55:32.864118 master-0 kubenswrapper[3962]: W0308 21:55:32.861522 3962 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 08 21:55:32.864118 master-0 kubenswrapper[3962]: W0308 21:55:32.861531 3962 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 08 21:55:32.864118 master-0 kubenswrapper[3962]: W0308 21:55:32.861538 3962 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 08 21:55:32.864118 master-0 kubenswrapper[3962]: W0308 21:55:32.861546 3962 feature_gate.go:330] unrecognized feature gate: GatewayAPI
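
[Editor's note] The "unrecognized feature gate" warnings above come from feature_gate.go:330: the kubelet is handed the cluster-wide OpenShift feature-gate set, warns on every name it has no registration for (names such as GatewayAPI or PlatformOperators are OpenShift API gates, not kubelet gates), and applies only the ones it knows; the resolved set is logged later at feature_gate.go:386. A minimal sketch of how such a gate map would look in a KubeletConfiguration, assuming the upstream featureGates field; not this cluster's actual file:

# Sketch, assuming the upstream KubeletConfiguration v1beta1 schema.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CloudDualStackNodeIPs: true   # recognized GA gate; setting it draws the feature_gate.go:353 warning
  KMSv1: true                   # recognized deprecated gate; warned at feature_gate.go:351
  GatewayAPI: true              # OpenShift-level gate: unrecognized by the kubelet and ignored with a warning
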
Mar 08 21:55:32.864118 master-0 kubenswrapper[3962]: I0308 21:55:32.862728 3962 flags.go:64] FLAG: --address="0.0.0.0"
Mar 08 21:55:32.864118 master-0 kubenswrapper[3962]: I0308 21:55:32.862757 3962 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 08 21:55:32.864118 master-0 kubenswrapper[3962]: I0308 21:55:32.862771 3962 flags.go:64] FLAG: --anonymous-auth="true"
Mar 08 21:55:32.864118 master-0 kubenswrapper[3962]: I0308 21:55:32.862782 3962 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 08 21:55:32.864118 master-0 kubenswrapper[3962]: I0308 21:55:32.862794 3962 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 08 21:55:32.864118 master-0 kubenswrapper[3962]: I0308 21:55:32.862803 3962 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 08 21:55:32.864118 master-0 kubenswrapper[3962]: I0308 21:55:32.862815 3962 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 08 21:55:32.864118 master-0 kubenswrapper[3962]: I0308 21:55:32.862827 3962 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 08 21:55:32.864118 master-0 kubenswrapper[3962]: I0308 21:55:32.862837 3962 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 08 21:55:32.864118 master-0 kubenswrapper[3962]: I0308 21:55:32.862845 3962 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 08 21:55:32.864118 master-0 kubenswrapper[3962]: I0308 21:55:32.862855 3962 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 08 21:55:32.864118 master-0 kubenswrapper[3962]: I0308 21:55:32.862865 3962 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 08 21:55:32.864118 master-0 kubenswrapper[3962]: I0308 21:55:32.862875 3962 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 08 21:55:32.864118 master-0 kubenswrapper[3962]: I0308 21:55:32.862883 3962 flags.go:64] FLAG: --cgroup-root=""
Mar 08 21:55:32.864118 master-0 kubenswrapper[3962]: I0308 21:55:32.862892 3962 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 08 21:55:32.864118 master-0 kubenswrapper[3962]: I0308 21:55:32.862901 3962 flags.go:64] FLAG: --client-ca-file=""
Mar 08 21:55:32.865163 master-0 kubenswrapper[3962]: I0308 21:55:32.862911 3962 flags.go:64] FLAG: --cloud-config=""
Mar 08 21:55:32.865163 master-0 kubenswrapper[3962]: I0308 21:55:32.862919 3962 flags.go:64] FLAG: --cloud-provider=""
Mar 08 21:55:32.865163 master-0 kubenswrapper[3962]: I0308 21:55:32.862928 3962 flags.go:64] FLAG: --cluster-dns="[]"
Mar 08 21:55:32.865163 master-0 kubenswrapper[3962]: I0308 21:55:32.862941 3962 flags.go:64] FLAG: --cluster-domain=""
Mar 08 21:55:32.865163 master-0 kubenswrapper[3962]: I0308 21:55:32.862950 3962 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 08 21:55:32.865163 master-0 kubenswrapper[3962]: I0308 21:55:32.862959 3962 flags.go:64] FLAG: --config-dir=""
Mar 08 21:55:32.865163 master-0 kubenswrapper[3962]: I0308 21:55:32.862968 3962 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 08 21:55:32.865163 master-0 kubenswrapper[3962]: I0308 21:55:32.862978 3962 flags.go:64] FLAG: --container-log-max-files="5"
Mar 08 21:55:32.865163 master-0 kubenswrapper[3962]: I0308 21:55:32.862989 3962 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 08 21:55:32.865163 master-0 kubenswrapper[3962]: I0308 21:55:32.862998 3962 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 08 21:55:32.865163 master-0 kubenswrapper[3962]: I0308 21:55:32.863007 3962 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 08 21:55:32.865163 master-0 kubenswrapper[3962]: I0308 21:55:32.863017 3962 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 08 21:55:32.865163 master-0 kubenswrapper[3962]: I0308 21:55:32.863026 3962 flags.go:64] FLAG: --contention-profiling="false"
Mar 08 21:55:32.865163 master-0 kubenswrapper[3962]: I0308 21:55:32.863035 3962 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 08 21:55:32.865163 master-0 kubenswrapper[3962]: I0308 21:55:32.863044 3962 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 08 21:55:32.865163 master-0 kubenswrapper[3962]: I0308 21:55:32.863054 3962 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 08 21:55:32.865163 master-0 kubenswrapper[3962]: I0308 21:55:32.863065 3962 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 08 21:55:32.865163 master-0 kubenswrapper[3962]: I0308 21:55:32.863113 3962 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 08 21:55:32.865163 master-0 kubenswrapper[3962]: I0308 21:55:32.863123 3962 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 08 21:55:32.865163 master-0 kubenswrapper[3962]: I0308 21:55:32.863134 3962 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 08 21:55:32.865163 master-0 kubenswrapper[3962]: I0308 21:55:32.863143 3962 flags.go:64] FLAG: --enable-load-reader="false"
Mar 08 21:55:32.865163 master-0 kubenswrapper[3962]: I0308 21:55:32.863153 3962 flags.go:64] FLAG: --enable-server="true"
Mar 08 21:55:32.865163 master-0 kubenswrapper[3962]: I0308 21:55:32.863163 3962 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 08 21:55:32.865163 master-0 kubenswrapper[3962]: I0308 21:55:32.863174 3962 flags.go:64] FLAG: --event-burst="100"
Mar 08 21:55:32.865163 master-0 kubenswrapper[3962]: I0308 21:55:32.863184 3962 flags.go:64] FLAG: --event-qps="50"
Mar 08 21:55:32.866357 master-0 kubenswrapper[3962]: I0308 21:55:32.863194 3962 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 08 21:55:32.866357 master-0 kubenswrapper[3962]: I0308 21:55:32.863203 3962 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 08 21:55:32.866357 master-0 kubenswrapper[3962]: I0308 21:55:32.863212 3962 flags.go:64] FLAG: --eviction-hard=""
Mar 08 21:55:32.866357 master-0 kubenswrapper[3962]: I0308 21:55:32.863223 3962 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 08 21:55:32.866357 master-0 kubenswrapper[3962]: I0308 21:55:32.863232 3962 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 08 21:55:32.866357 master-0 kubenswrapper[3962]: I0308 21:55:32.863241 3962 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 08 21:55:32.866357 master-0 kubenswrapper[3962]: I0308 21:55:32.863251 3962 flags.go:64] FLAG: --eviction-soft=""
Mar 08 21:55:32.866357 master-0 kubenswrapper[3962]: I0308 21:55:32.863260 3962 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 08 21:55:32.866357 master-0 kubenswrapper[3962]: I0308 21:55:32.863269 3962 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 08 21:55:32.866357 master-0 kubenswrapper[3962]: I0308 21:55:32.863279 3962 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 08 21:55:32.866357 master-0 kubenswrapper[3962]: I0308 21:55:32.863288 3962 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 08 21:55:32.866357 master-0 kubenswrapper[3962]: I0308 21:55:32.863297 3962 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 08 21:55:32.866357 master-0 kubenswrapper[3962]: I0308 21:55:32.863306 3962 flags.go:64] FLAG: --fail-swap-on="true"
Mar 08 21:55:32.866357 master-0 kubenswrapper[3962]: I0308 21:55:32.863315 3962 flags.go:64] FLAG: --feature-gates=""
Mar 08 21:55:32.866357 master-0 kubenswrapper[3962]: I0308 21:55:32.863326 3962 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 08 21:55:32.866357 master-0 kubenswrapper[3962]: I0308 21:55:32.863336 3962 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 08 21:55:32.866357 master-0 kubenswrapper[3962]: I0308 21:55:32.863345 3962 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 08 21:55:32.866357 master-0 kubenswrapper[3962]: I0308 21:55:32.863354 3962 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 08 21:55:32.866357 master-0 kubenswrapper[3962]: I0308 21:55:32.863364 3962 flags.go:64] FLAG: --healthz-port="10248"
Mar 08 21:55:32.866357 master-0 kubenswrapper[3962]: I0308 21:55:32.863373 3962 flags.go:64] FLAG: --help="false"
Mar 08 21:55:32.866357 master-0 kubenswrapper[3962]: I0308 21:55:32.863382 3962 flags.go:64] FLAG: --hostname-override=""
Mar 08 21:55:32.866357 master-0 kubenswrapper[3962]: I0308 21:55:32.863390 3962 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 08 21:55:32.866357 master-0 kubenswrapper[3962]: I0308 21:55:32.863400 3962 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 08 21:55:32.866357 master-0 kubenswrapper[3962]: I0308 21:55:32.863410 3962 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 08 21:55:32.866357 master-0 kubenswrapper[3962]: I0308 21:55:32.863419 3962 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 08 21:55:32.867471 master-0 kubenswrapper[3962]: I0308 21:55:32.863428 3962 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 08 21:55:32.867471 master-0 kubenswrapper[3962]: I0308 21:55:32.863438 3962 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 08 21:55:32.867471 master-0 kubenswrapper[3962]: I0308 21:55:32.863447 3962 flags.go:64] FLAG: --image-service-endpoint=""
Mar 08 21:55:32.867471 master-0 kubenswrapper[3962]: I0308 21:55:32.863456 3962 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 08 21:55:32.867471 master-0 kubenswrapper[3962]: I0308 21:55:32.863464 3962 flags.go:64] FLAG: --kube-api-burst="100"
Mar 08 21:55:32.867471 master-0 kubenswrapper[3962]: I0308 21:55:32.863473 3962 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 08 21:55:32.867471 master-0 kubenswrapper[3962]: I0308 21:55:32.863483 3962 flags.go:64] FLAG: --kube-api-qps="50"
Mar 08 21:55:32.867471 master-0 kubenswrapper[3962]: I0308 21:55:32.863492 3962 flags.go:64] FLAG: --kube-reserved=""
Mar 08 21:55:32.867471 master-0 kubenswrapper[3962]: I0308 21:55:32.863502 3962 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 08 21:55:32.867471 master-0 kubenswrapper[3962]: I0308 21:55:32.863510 3962 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 08 21:55:32.867471 master-0 kubenswrapper[3962]: I0308 21:55:32.863524 3962 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 08 21:55:32.867471 master-0 kubenswrapper[3962]: I0308 21:55:32.863605 3962 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 08 21:55:32.867471 master-0 kubenswrapper[3962]: I0308 21:55:32.863614 3962 flags.go:64] FLAG: --lock-file=""
Mar 08 21:55:32.867471 master-0 kubenswrapper[3962]: I0308 21:55:32.863624 3962 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 08 21:55:32.867471 master-0 kubenswrapper[3962]: I0308 21:55:32.863633 3962 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 08 21:55:32.867471 master-0 kubenswrapper[3962]: I0308 21:55:32.863642 3962 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 08 21:55:32.867471 master-0 kubenswrapper[3962]: I0308 21:55:32.863655 3962 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 08 21:55:32.867471 master-0 kubenswrapper[3962]: I0308 21:55:32.863664 3962 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 08 21:55:32.867471 master-0 kubenswrapper[3962]: I0308 21:55:32.863674 3962 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 08 21:55:32.867471 master-0 kubenswrapper[3962]: I0308 21:55:32.863683 3962 flags.go:64] FLAG: --logging-format="text"
Mar 08 21:55:32.867471 master-0 kubenswrapper[3962]: I0308 21:55:32.863692 3962 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 08 21:55:32.867471 master-0 kubenswrapper[3962]: I0308 21:55:32.863702 3962 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 08 21:55:32.867471 master-0 kubenswrapper[3962]: I0308 21:55:32.863712 3962 flags.go:64] FLAG: --manifest-url=""
Mar 08 21:55:32.867471 master-0 kubenswrapper[3962]: I0308 21:55:32.863722 3962 flags.go:64] FLAG: --manifest-url-header=""
Mar 08 21:55:32.867471 master-0 kubenswrapper[3962]: I0308 21:55:32.863734 3962 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 08 21:55:32.868646 master-0 kubenswrapper[3962]: I0308 21:55:32.863743 3962 flags.go:64] FLAG: --max-open-files="1000000"
Mar 08 21:55:32.868646 master-0 kubenswrapper[3962]: I0308 21:55:32.863754 3962 flags.go:64] FLAG: --max-pods="110"
Mar 08 21:55:32.868646 master-0 kubenswrapper[3962]: I0308 21:55:32.863763 3962 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 08 21:55:32.868646 master-0 kubenswrapper[3962]: I0308 21:55:32.863772 3962 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 08 21:55:32.868646 master-0 kubenswrapper[3962]: I0308 21:55:32.863782 3962 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 08 21:55:32.868646 master-0 kubenswrapper[3962]: I0308 21:55:32.863794 3962 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 08 21:55:32.868646 master-0 kubenswrapper[3962]: I0308 21:55:32.863803 3962 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 08 21:55:32.868646 master-0 kubenswrapper[3962]: I0308 21:55:32.863812 3962 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 08 21:55:32.868646 master-0 kubenswrapper[3962]: I0308 21:55:32.863821 3962 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 08 21:55:32.868646 master-0 kubenswrapper[3962]: I0308 21:55:32.863841 3962 flags.go:64] FLAG: --node-status-max-images="50"
Mar 08 21:55:32.868646 master-0 kubenswrapper[3962]: I0308 21:55:32.863851 3962 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 08 21:55:32.868646 master-0 kubenswrapper[3962]: I0308 21:55:32.863860 3962 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 08 21:55:32.868646 master-0 kubenswrapper[3962]: I0308 21:55:32.863870 3962 flags.go:64] FLAG: --pod-cidr=""
Mar 08 21:55:32.868646 master-0 kubenswrapper[3962]: I0308 21:55:32.863879 3962 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3"
Mar 08 21:55:32.868646 master-0 kubenswrapper[3962]: I0308 21:55:32.863894 3962 flags.go:64] FLAG: --pod-manifest-path=""
Mar 08 21:55:32.868646 master-0 kubenswrapper[3962]: I0308 21:55:32.863903 3962 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 08 21:55:32.868646 master-0 kubenswrapper[3962]: I0308 21:55:32.863912 3962 flags.go:64] FLAG: --pods-per-core="0"
Mar 08 21:55:32.868646 master-0 kubenswrapper[3962]: I0308 21:55:32.863921 3962 flags.go:64] FLAG: --port="10250"
Mar 08 21:55:32.868646 master-0 kubenswrapper[3962]: I0308 21:55:32.863931 3962 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 08 21:55:32.868646 master-0 kubenswrapper[3962]: I0308 21:55:32.863940 3962 flags.go:64] FLAG: --provider-id=""
Mar 08 21:55:32.868646 master-0 kubenswrapper[3962]: I0308 21:55:32.863949 3962 flags.go:64] FLAG: --qos-reserved=""
Mar 08 21:55:32.868646 master-0 kubenswrapper[3962]: I0308 21:55:32.863958 3962 flags.go:64] FLAG: --read-only-port="10255"
Mar 08 21:55:32.868646 master-0 kubenswrapper[3962]: I0308 21:55:32.863968 3962 flags.go:64] FLAG: --register-node="true"
Mar 08 21:55:32.868646 master-0 kubenswrapper[3962]: I0308 21:55:32.863977 3962 flags.go:64] FLAG: --register-schedulable="true"
Mar 08 21:55:32.869743 master-0 kubenswrapper[3962]: I0308 21:55:32.863986 3962 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 08 21:55:32.869743 master-0 kubenswrapper[3962]: I0308 21:55:32.864000 3962 flags.go:64] FLAG: --registry-burst="10"
Mar 08 21:55:32.869743 master-0 kubenswrapper[3962]: I0308 21:55:32.864009 3962 flags.go:64] FLAG: --registry-qps="5"
Mar 08 21:55:32.869743 master-0 kubenswrapper[3962]: I0308 21:55:32.864018 3962 flags.go:64] FLAG: --reserved-cpus=""
Mar 08 21:55:32.869743 master-0 kubenswrapper[3962]: I0308 21:55:32.864027 3962 flags.go:64] FLAG: --reserved-memory=""
Mar 08 21:55:32.869743 master-0 kubenswrapper[3962]: I0308 21:55:32.864038 3962 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 08 21:55:32.869743 master-0 kubenswrapper[3962]: I0308 21:55:32.864047 3962 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 08 21:55:32.869743 master-0 kubenswrapper[3962]: I0308 21:55:32.864057 3962 flags.go:64] FLAG: --rotate-certificates="false"
Mar 08 21:55:32.869743 master-0 kubenswrapper[3962]: I0308 21:55:32.864066 3962 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 08 21:55:32.869743 master-0 kubenswrapper[3962]: I0308 21:55:32.864075 3962 flags.go:64] FLAG: --runonce="false"
Mar 08 21:55:32.869743 master-0 kubenswrapper[3962]: I0308 21:55:32.864112 3962 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 08 21:55:32.869743 master-0 kubenswrapper[3962]: I0308 21:55:32.864123 3962 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 08 21:55:32.869743 master-0 kubenswrapper[3962]: I0308 21:55:32.864133 3962 flags.go:64] FLAG: --seccomp-default="false"
Mar 08 21:55:32.869743 master-0 kubenswrapper[3962]: I0308 21:55:32.864144 3962 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 08 21:55:32.869743 master-0 kubenswrapper[3962]: I0308 21:55:32.864164 3962 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 08 21:55:32.869743 master-0 kubenswrapper[3962]: I0308 21:55:32.864173 3962 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 08 21:55:32.869743 master-0 kubenswrapper[3962]: I0308 21:55:32.864184 3962 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 08 21:55:32.869743 master-0 kubenswrapper[3962]: I0308 21:55:32.864193 3962 flags.go:64] FLAG: --storage-driver-password="root"
Mar 08 21:55:32.869743 master-0 kubenswrapper[3962]: I0308 21:55:32.864202 3962 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 08 21:55:32.869743 master-0 kubenswrapper[3962]: I0308 21:55:32.864212 3962 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 08 21:55:32.869743 master-0 kubenswrapper[3962]: I0308 21:55:32.864221 3962 flags.go:64] FLAG: --storage-driver-user="root"
Mar 08 21:55:32.869743 master-0 kubenswrapper[3962]: I0308 21:55:32.864230 3962 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 08 21:55:32.869743 master-0 kubenswrapper[3962]: I0308 21:55:32.864240 3962 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 08 21:55:32.869743 master-0 kubenswrapper[3962]: I0308 21:55:32.864249 3962 flags.go:64] FLAG: --system-cgroups=""
Mar 08 21:55:32.869743 master-0 kubenswrapper[3962]: I0308 21:55:32.864261 3962 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 08 21:55:32.870882 master-0 kubenswrapper[3962]: I0308 21:55:32.864276 3962 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 08 21:55:32.870882 master-0 kubenswrapper[3962]: I0308 21:55:32.864286 3962 flags.go:64] FLAG: --tls-cert-file=""
Mar 08 21:55:32.870882 master-0 kubenswrapper[3962]: I0308 21:55:32.864295 3962 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 08 21:55:32.870882 master-0 kubenswrapper[3962]: I0308 21:55:32.864307 3962 flags.go:64] FLAG: --tls-min-version=""
Mar 08 21:55:32.870882 master-0 kubenswrapper[3962]: I0308 21:55:32.864316 3962 flags.go:64] FLAG: --tls-private-key-file=""
Mar 08 21:55:32.870882 master-0 kubenswrapper[3962]: I0308 21:55:32.864325 3962 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 08 21:55:32.870882 master-0 kubenswrapper[3962]: I0308 21:55:32.864335 3962 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 08 21:55:32.870882 master-0 kubenswrapper[3962]: I0308 21:55:32.864344 3962 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 08 21:55:32.870882 master-0 kubenswrapper[3962]: I0308 21:55:32.864353 3962 flags.go:64] FLAG: --v="2"
Mar 08 21:55:32.870882 master-0 kubenswrapper[3962]: I0308 21:55:32.864365 3962 flags.go:64] FLAG: --version="false"
Mar 08 21:55:32.870882 master-0 kubenswrapper[3962]: I0308 21:55:32.864377 3962 flags.go:64] FLAG: --vmodule=""
Mar 08 21:55:32.870882 master-0 kubenswrapper[3962]: I0308 21:55:32.864387 3962 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 08 21:55:32.870882 master-0 kubenswrapper[3962]: I0308 21:55:32.864397 3962 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
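
[Editor's note] The FLAG dump above shows --eviction-hard and --eviction-soft empty on the command line (the effective values live in the --config file), and the deprecation notice for --minimum-container-ttl-duration at the top of this log points to them as the replacement. A sketch with hypothetical thresholds, assuming the upstream KubeletConfiguration eviction fields; these are not this node's real settings:

# Sketch only: illustrative thresholds, not read from this node.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:                    # hard limits: evict immediately when crossed
  memory.available: "500Mi"
  nodefs.available: "10%"
evictionSoft:                    # soft limits: evict after the grace period below
  memory.available: "1Gi"
evictionSoftGracePeriod:
  memory.available: "2m"
evictionPressureTransitionPeriod: 5m0s   # matches --eviction-pressure-transition-period above
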
Mar 08 21:55:32.870882 master-0 kubenswrapper[3962]: W0308 21:55:32.864609 3962 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 08 21:55:32.870882 master-0 kubenswrapper[3962]: W0308 21:55:32.864619 3962 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 08 21:55:32.870882 master-0 kubenswrapper[3962]: W0308 21:55:32.864628 3962 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 08 21:55:32.870882 master-0 kubenswrapper[3962]: W0308 21:55:32.864636 3962 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 08 21:55:32.870882 master-0 kubenswrapper[3962]: W0308 21:55:32.864645 3962 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 08 21:55:32.870882 master-0 kubenswrapper[3962]: W0308 21:55:32.864653 3962 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 08 21:55:32.870882 master-0 kubenswrapper[3962]: W0308 21:55:32.864663 3962 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 08 21:55:32.870882 master-0 kubenswrapper[3962]: W0308 21:55:32.864671 3962 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 08 21:55:32.870882 master-0 kubenswrapper[3962]: W0308 21:55:32.864685 3962 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 08 21:55:32.870882 master-0 kubenswrapper[3962]: W0308 21:55:32.864694 3962 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 08 21:55:32.871940 master-0 kubenswrapper[3962]: W0308 21:55:32.864702 3962 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 08 21:55:32.871940 master-0 kubenswrapper[3962]: W0308 21:55:32.864711 3962 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 08 21:55:32.871940 master-0 kubenswrapper[3962]: W0308 21:55:32.864719 3962 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 08 21:55:32.871940 master-0 kubenswrapper[3962]: W0308 21:55:32.864728 3962 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 08 21:55:32.871940 master-0 kubenswrapper[3962]: W0308 21:55:32.864737 3962 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 08 21:55:32.871940 master-0 kubenswrapper[3962]: W0308 21:55:32.864745 3962 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 08 21:55:32.871940 master-0 kubenswrapper[3962]: W0308 21:55:32.864752 3962 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 08 21:55:32.871940 master-0 kubenswrapper[3962]: W0308 21:55:32.864763 3962 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 08 21:55:32.871940 master-0 kubenswrapper[3962]: W0308 21:55:32.864773 3962 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 08 21:55:32.871940 master-0 kubenswrapper[3962]: W0308 21:55:32.864783 3962 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 08 21:55:32.871940 master-0 kubenswrapper[3962]: W0308 21:55:32.864791 3962 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 08 21:55:32.871940 master-0 kubenswrapper[3962]: W0308 21:55:32.864800 3962 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 08 21:55:32.871940 master-0 kubenswrapper[3962]: W0308 21:55:32.864809 3962 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 08 21:55:32.871940 master-0 kubenswrapper[3962]: W0308 21:55:32.864820 3962 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 08 21:55:32.871940 master-0 kubenswrapper[3962]: W0308 21:55:32.864829 3962 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 08 21:55:32.871940 master-0 kubenswrapper[3962]: W0308 21:55:32.864838 3962 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 08 21:55:32.871940 master-0 kubenswrapper[3962]: W0308 21:55:32.864846 3962 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 08 21:55:32.871940 master-0 kubenswrapper[3962]: W0308 21:55:32.864855 3962 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 08 21:55:32.871940 master-0 kubenswrapper[3962]: W0308 21:55:32.864863 3962 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 08 21:55:32.872929 master-0 kubenswrapper[3962]: W0308 21:55:32.864871 3962 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 08 21:55:32.872929 master-0 kubenswrapper[3962]: W0308 21:55:32.864879 3962 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 08 21:55:32.872929 master-0 kubenswrapper[3962]: W0308 21:55:32.864886 3962 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 08 21:55:32.872929 master-0 kubenswrapper[3962]: W0308 21:55:32.864894 3962 feature_gate.go:330] unrecognized feature gate: Example
Mar 08 21:55:32.872929 master-0 kubenswrapper[3962]: W0308 21:55:32.864902 3962 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 08 21:55:32.872929 master-0 kubenswrapper[3962]: W0308 21:55:32.864910 3962 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 08 21:55:32.872929 master-0 kubenswrapper[3962]: W0308 21:55:32.864918 3962 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 08 21:55:32.872929 master-0 kubenswrapper[3962]: W0308 21:55:32.864926 3962 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 08 21:55:32.872929 master-0 kubenswrapper[3962]: W0308 21:55:32.864935 3962 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 08 21:55:32.872929 master-0 kubenswrapper[3962]: W0308 21:55:32.864942 3962 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 08 21:55:32.872929 master-0 kubenswrapper[3962]: W0308 21:55:32.864950 3962 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 08 21:55:32.872929 master-0 kubenswrapper[3962]: W0308 21:55:32.864962 3962 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 08 21:55:32.872929 master-0 kubenswrapper[3962]: W0308 21:55:32.864972 3962 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 08 21:55:32.872929 master-0 kubenswrapper[3962]: W0308 21:55:32.864982 3962 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 08 21:55:32.872929 master-0 kubenswrapper[3962]: W0308 21:55:32.864991 3962 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 08 21:55:32.872929 master-0 kubenswrapper[3962]: W0308 21:55:32.864999 3962 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 08 21:55:32.872929 master-0 kubenswrapper[3962]: W0308 21:55:32.865009 3962 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 08 21:55:32.872929 master-0 kubenswrapper[3962]: W0308 21:55:32.865017 3962 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 08 21:55:32.872929 master-0 kubenswrapper[3962]: W0308 21:55:32.865026 3962 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 08 21:55:32.873813 master-0 kubenswrapper[3962]: W0308 21:55:32.865035 3962 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 08 21:55:32.873813 master-0 kubenswrapper[3962]: W0308 21:55:32.865043 3962 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 08 21:55:32.873813 master-0 kubenswrapper[3962]: W0308 21:55:32.865051 3962 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 08 21:55:32.873813 master-0 kubenswrapper[3962]: W0308 21:55:32.865061 3962 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 08 21:55:32.873813 master-0 kubenswrapper[3962]: W0308 21:55:32.865076 3962 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 08 21:55:32.873813 master-0 kubenswrapper[3962]: W0308 21:55:32.865085 3962 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 08 21:55:32.873813 master-0 kubenswrapper[3962]: W0308 21:55:32.865116 3962 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 08 21:55:32.873813 master-0 kubenswrapper[3962]: W0308 21:55:32.865125 3962 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 08 21:55:32.873813 master-0 kubenswrapper[3962]: W0308 21:55:32.865133 3962 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 08 21:55:32.873813 master-0 kubenswrapper[3962]: W0308 21:55:32.865142 3962 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 08 21:55:32.873813 master-0 kubenswrapper[3962]: W0308 21:55:32.865151 3962 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 08 21:55:32.873813 master-0 kubenswrapper[3962]: W0308 21:55:32.865160 3962 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 08 21:55:32.873813 master-0 kubenswrapper[3962]: W0308 21:55:32.865168 3962 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 08 21:55:32.873813 master-0 kubenswrapper[3962]: W0308 21:55:32.865178 3962 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 08 21:55:32.873813 master-0 kubenswrapper[3962]: W0308 21:55:32.865187 3962 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 08 21:55:32.873813 master-0 kubenswrapper[3962]: W0308 21:55:32.865196 3962 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 08 21:55:32.873813 master-0 kubenswrapper[3962]: W0308 21:55:32.865206 3962 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 08 21:55:32.873813 master-0 kubenswrapper[3962]: W0308 21:55:32.865214 3962 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 08 21:55:32.873813 master-0 kubenswrapper[3962]: W0308 21:55:32.865226 3962 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 08 21:55:32.873813 master-0 kubenswrapper[3962]: W0308 21:55:32.865237 3962 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 08 21:55:32.874833 master-0 kubenswrapper[3962]: W0308 21:55:32.865247 3962 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 08 21:55:32.874833 master-0 kubenswrapper[3962]: W0308 21:55:32.865257 3962 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 08 21:55:32.874833 master-0 kubenswrapper[3962]: W0308 21:55:32.865267 3962 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 08 21:55:32.874833 master-0 kubenswrapper[3962]: W0308 21:55:32.865276 3962 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 08 21:55:32.874833 master-0 kubenswrapper[3962]: I0308 21:55:32.865303 3962 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 08 21:55:32.880613 master-0 kubenswrapper[3962]: I0308 21:55:32.880519 3962 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 08 21:55:32.880613 master-0 kubenswrapper[3962]: I0308 21:55:32.880605 3962 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 08 21:55:32.880860 master-0 kubenswrapper[3962]: W0308 21:55:32.880811 3962 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 08 21:55:32.880860 master-0 kubenswrapper[3962]: W0308 21:55:32.880851 3962 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 08 21:55:32.880972 master-0 kubenswrapper[3962]: W0308 21:55:32.880865 3962 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 08 21:55:32.880972 master-0 kubenswrapper[3962]: W0308 21:55:32.880880 3962 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 08 21:55:32.880972 master-0 kubenswrapper[3962]: W0308 21:55:32.880893 3962 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 08 21:55:32.880972 master-0 kubenswrapper[3962]: W0308 21:55:32.880904 3962 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 08 21:55:32.880972 master-0 kubenswrapper[3962]: W0308 21:55:32.880916 3962 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 08 21:55:32.880972 master-0 kubenswrapper[3962]: W0308 21:55:32.880927 3962 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 08 21:55:32.880972 master-0 kubenswrapper[3962]: W0308 21:55:32.880939 3962 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 08 21:55:32.880972 master-0 kubenswrapper[3962]: W0308 21:55:32.880950 3962 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
VSphereMultiVCenters Mar 08 21:55:32.880972 master-0 kubenswrapper[3962]: W0308 21:55:32.880971 3962 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 08 21:55:32.880972 master-0 kubenswrapper[3962]: W0308 21:55:32.880983 3962 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 08 21:55:32.881481 master-0 kubenswrapper[3962]: W0308 21:55:32.880995 3962 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 08 21:55:32.881481 master-0 kubenswrapper[3962]: W0308 21:55:32.881007 3962 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 08 21:55:32.881481 master-0 kubenswrapper[3962]: W0308 21:55:32.881016 3962 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 08 21:55:32.881481 master-0 kubenswrapper[3962]: W0308 21:55:32.881025 3962 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 08 21:55:32.881481 master-0 kubenswrapper[3962]: W0308 21:55:32.881034 3962 feature_gate.go:330] unrecognized feature gate: Example Mar 08 21:55:32.881481 master-0 kubenswrapper[3962]: W0308 21:55:32.881043 3962 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 08 21:55:32.881481 master-0 kubenswrapper[3962]: W0308 21:55:32.881052 3962 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 08 21:55:32.881481 master-0 kubenswrapper[3962]: W0308 21:55:32.881060 3962 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 08 21:55:32.881481 master-0 kubenswrapper[3962]: W0308 21:55:32.881077 3962 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 08 21:55:32.881481 master-0 kubenswrapper[3962]: W0308 21:55:32.881158 3962 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 08 21:55:32.881481 master-0 kubenswrapper[3962]: W0308 21:55:32.881171 3962 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 08 21:55:32.881481 master-0 kubenswrapper[3962]: W0308 21:55:32.881183 3962 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 08 21:55:32.881481 master-0 kubenswrapper[3962]: W0308 21:55:32.881195 3962 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 08 21:55:32.881481 master-0 kubenswrapper[3962]: W0308 21:55:32.881206 3962 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 08 21:55:32.881481 master-0 kubenswrapper[3962]: W0308 21:55:32.881217 3962 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 08 21:55:32.881481 master-0 kubenswrapper[3962]: W0308 21:55:32.881227 3962 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 08 21:55:32.881481 master-0 kubenswrapper[3962]: W0308 21:55:32.881238 3962 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 08 21:55:32.881481 master-0 kubenswrapper[3962]: W0308 21:55:32.881249 3962 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 08 21:55:32.881481 master-0 kubenswrapper[3962]: W0308 21:55:32.881260 3962 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 08 21:55:32.881481 master-0 kubenswrapper[3962]: W0308 21:55:32.881270 3962 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 08 21:55:32.882368 master-0 kubenswrapper[3962]: W0308 21:55:32.881278 3962 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 08 21:55:32.882368 master-0 kubenswrapper[3962]: W0308 21:55:32.881302 3962 
feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 08 21:55:32.882368 master-0 kubenswrapper[3962]: W0308 21:55:32.881311 3962 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 08 21:55:32.882368 master-0 kubenswrapper[3962]: W0308 21:55:32.881320 3962 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 08 21:55:32.882368 master-0 kubenswrapper[3962]: W0308 21:55:32.881329 3962 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 08 21:55:32.882368 master-0 kubenswrapper[3962]: W0308 21:55:32.881337 3962 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 08 21:55:32.882368 master-0 kubenswrapper[3962]: W0308 21:55:32.881347 3962 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 08 21:55:32.882368 master-0 kubenswrapper[3962]: W0308 21:55:32.881355 3962 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 08 21:55:32.882368 master-0 kubenswrapper[3962]: W0308 21:55:32.881363 3962 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 08 21:55:32.882368 master-0 kubenswrapper[3962]: W0308 21:55:32.881375 3962 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 08 21:55:32.882368 master-0 kubenswrapper[3962]: W0308 21:55:32.881393 3962 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 08 21:55:32.882368 master-0 kubenswrapper[3962]: W0308 21:55:32.881402 3962 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 08 21:55:32.882368 master-0 kubenswrapper[3962]: W0308 21:55:32.881413 3962 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 08 21:55:32.882368 master-0 kubenswrapper[3962]: W0308 21:55:32.881423 3962 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 08 21:55:32.882368 master-0 kubenswrapper[3962]: W0308 21:55:32.881432 3962 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 08 21:55:32.882368 master-0 kubenswrapper[3962]: W0308 21:55:32.881441 3962 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 08 21:55:32.882368 master-0 kubenswrapper[3962]: W0308 21:55:32.881451 3962 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 08 21:55:32.882368 master-0 kubenswrapper[3962]: W0308 21:55:32.881460 3962 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 08 21:55:32.882368 master-0 kubenswrapper[3962]: W0308 21:55:32.881469 3962 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 08 21:55:32.882368 master-0 kubenswrapper[3962]: W0308 21:55:32.881478 3962 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 08 21:55:32.883318 master-0 kubenswrapper[3962]: W0308 21:55:32.881490 3962 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 08 21:55:32.883318 master-0 kubenswrapper[3962]: W0308 21:55:32.881501 3962 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 08 21:55:32.883318 master-0 kubenswrapper[3962]: W0308 21:55:32.881514 3962 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 08 21:55:32.883318 master-0 kubenswrapper[3962]: W0308 21:55:32.881524 3962 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 08 21:55:32.883318 master-0 kubenswrapper[3962]: W0308 21:55:32.881533 3962 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 08 21:55:32.883318 master-0 kubenswrapper[3962]: W0308 21:55:32.881542 3962 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 08 21:55:32.883318 master-0 kubenswrapper[3962]: W0308 21:55:32.881550 3962 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 08 21:55:32.883318 master-0 kubenswrapper[3962]: W0308 21:55:32.881562 3962 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 08 21:55:32.883318 master-0 kubenswrapper[3962]: W0308 21:55:32.881572 3962 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 08 21:55:32.883318 master-0 kubenswrapper[3962]: W0308 21:55:32.881581 3962 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 08 21:55:32.883318 master-0 kubenswrapper[3962]: W0308 21:55:32.881589 3962 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 08 21:55:32.883318 master-0 kubenswrapper[3962]: W0308 21:55:32.881598 3962 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 08 21:55:32.883318 master-0 kubenswrapper[3962]: W0308 21:55:32.881607 3962 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 08 21:55:32.883318 master-0 kubenswrapper[3962]: W0308 21:55:32.881615 3962 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 08 21:55:32.883318 master-0 kubenswrapper[3962]: W0308 21:55:32.881624 3962 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 08 21:55:32.883318 master-0 kubenswrapper[3962]: W0308 21:55:32.881632 3962 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 08 21:55:32.883318 master-0 kubenswrapper[3962]: W0308 21:55:32.881643 3962 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 08 21:55:32.883318 master-0 kubenswrapper[3962]: W0308 21:55:32.881657 3962 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 08 21:55:32.884285 master-0 kubenswrapper[3962]: W0308 21:55:32.881668 3962 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 08 21:55:32.884285 master-0 kubenswrapper[3962]: I0308 21:55:32.881684 3962 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 08 21:55:32.884285 master-0 kubenswrapper[3962]: W0308 21:55:32.881934 3962 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 08 21:55:32.884285 master-0 kubenswrapper[3962]: W0308 21:55:32.881951 3962 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 08 21:55:32.884285 master-0 kubenswrapper[3962]: W0308 21:55:32.881962 3962 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 08 21:55:32.884285 master-0 kubenswrapper[3962]: W0308 21:55:32.881972 3962 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 08 21:55:32.884285 master-0 kubenswrapper[3962]: W0308 21:55:32.881985 3962 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 08 21:55:32.884285 master-0 kubenswrapper[3962]: W0308 21:55:32.881995 3962 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 08 21:55:32.884285 master-0 kubenswrapper[3962]: W0308 21:55:32.882007 3962 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 08 21:55:32.884285 master-0 kubenswrapper[3962]: W0308 21:55:32.882021 3962 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 08 21:55:32.884285 master-0 kubenswrapper[3962]: W0308 21:55:32.882034 3962 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 08 21:55:32.884285 master-0 kubenswrapper[3962]: W0308 21:55:32.882046 3962 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 08 21:55:32.884285 master-0 kubenswrapper[3962]: W0308 21:55:32.882057 3962 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 08 21:55:32.884285 master-0 kubenswrapper[3962]: W0308 21:55:32.882078 3962 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 08 21:55:32.884285 master-0 kubenswrapper[3962]: W0308 21:55:32.882119 3962 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 08 21:55:32.885081 master-0 kubenswrapper[3962]: W0308 21:55:32.882132 3962 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 08 21:55:32.885081 master-0 kubenswrapper[3962]: W0308 21:55:32.882147 3962 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 08 21:55:32.885081 master-0 kubenswrapper[3962]: W0308 21:55:32.882164 3962 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 08 21:55:32.885081 master-0 kubenswrapper[3962]: W0308 21:55:32.882175 3962 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 08 21:55:32.885081 master-0 kubenswrapper[3962]: W0308 21:55:32.882187 3962 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 08 21:55:32.885081 master-0 kubenswrapper[3962]: W0308 21:55:32.882199 3962 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 08 21:55:32.885081 master-0 kubenswrapper[3962]: W0308 21:55:32.882211 3962 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 08 21:55:32.885081 master-0 kubenswrapper[3962]: W0308 21:55:32.882222 3962 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 08 21:55:32.885081 master-0 kubenswrapper[3962]: W0308 21:55:32.882233 3962 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 08 21:55:32.885081 master-0 kubenswrapper[3962]: W0308 21:55:32.882244 3962 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 08 21:55:32.885081 master-0 kubenswrapper[3962]: W0308 21:55:32.882254 3962 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 08 21:55:32.885081 master-0 kubenswrapper[3962]: W0308 21:55:32.882262 3962 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 08 21:55:32.885081 master-0 kubenswrapper[3962]: W0308 21:55:32.882271 3962 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 08 21:55:32.885081 master-0 kubenswrapper[3962]: W0308 21:55:32.882279 3962 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 08 21:55:32.885081 master-0 kubenswrapper[3962]: W0308 21:55:32.882287 3962 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 08 21:55:32.885081 master-0 kubenswrapper[3962]: W0308 21:55:32.882296 3962 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 08 21:55:32.885081 master-0 kubenswrapper[3962]: W0308 21:55:32.882305 3962 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 08 21:55:32.885081 master-0 kubenswrapper[3962]: W0308 21:55:32.882313 3962 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 08 21:55:32.885081 master-0 kubenswrapper[3962]: W0308 21:55:32.882322 3962 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 08 21:55:32.885081 master-0 kubenswrapper[3962]: W0308 21:55:32.882330 3962 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 08 21:55:32.886244 master-0 kubenswrapper[3962]: W0308 21:55:32.882342 3962 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 08 21:55:32.886244 master-0 kubenswrapper[3962]: W0308 21:55:32.882351 3962 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 08 21:55:32.886244 master-0 kubenswrapper[3962]: W0308 21:55:32.882359 3962 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 08 21:55:32.886244 master-0 kubenswrapper[3962]: W0308 21:55:32.882368 3962 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 08 21:55:32.886244 master-0 kubenswrapper[3962]: W0308 21:55:32.882377 3962 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 08 21:55:32.886244 master-0 kubenswrapper[3962]: W0308 21:55:32.882386 3962 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 08 21:55:32.886244 master-0 kubenswrapper[3962]: W0308 21:55:32.882396 3962 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 08 21:55:32.886244 master-0 kubenswrapper[3962]: W0308 21:55:32.882405 3962 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 08 21:55:32.886244 master-0 kubenswrapper[3962]: W0308 21:55:32.882413 3962 feature_gate.go:330] unrecognized feature gate: Example
Mar 08 21:55:32.886244 master-0 kubenswrapper[3962]: W0308 21:55:32.882422 3962 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 08 21:55:32.886244 master-0 kubenswrapper[3962]: W0308 21:55:32.882430 3962 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 08 21:55:32.886244 master-0 kubenswrapper[3962]: W0308 21:55:32.882439 3962 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 08 21:55:32.886244 master-0 kubenswrapper[3962]: W0308 21:55:32.882447 3962 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 08 21:55:32.886244 master-0 kubenswrapper[3962]: W0308 21:55:32.882456 3962 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 08 21:55:32.886244 master-0 kubenswrapper[3962]: W0308 21:55:32.882464 3962 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 08 21:55:32.886244 master-0 kubenswrapper[3962]: W0308 21:55:32.882475 3962 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 08 21:55:32.886244 master-0 kubenswrapper[3962]: W0308 21:55:32.882487 3962 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 08 21:55:32.886244 master-0 kubenswrapper[3962]: W0308 21:55:32.882498 3962 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 08 21:55:32.886244 master-0 kubenswrapper[3962]: W0308 21:55:32.882507 3962 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 08 21:55:32.887067 master-0 kubenswrapper[3962]: W0308 21:55:32.882516 3962 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 08 21:55:32.887067 master-0 kubenswrapper[3962]: W0308 21:55:32.882525 3962 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 08 21:55:32.887067 master-0 kubenswrapper[3962]: W0308 21:55:32.882536 3962 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 08 21:55:32.887067 master-0 kubenswrapper[3962]: W0308 21:55:32.882547 3962 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 08 21:55:32.887067 master-0 kubenswrapper[3962]: W0308 21:55:32.882555 3962 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 08 21:55:32.887067 master-0 kubenswrapper[3962]: W0308 21:55:32.882564 3962 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 08 21:55:32.887067 master-0 kubenswrapper[3962]: W0308 21:55:32.882572 3962 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 08 21:55:32.887067 master-0 kubenswrapper[3962]: W0308 21:55:32.882581 3962 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 08 21:55:32.887067 master-0 kubenswrapper[3962]: W0308 21:55:32.882590 3962 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 08 21:55:32.887067 master-0 kubenswrapper[3962]: W0308 21:55:32.882599 3962 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 08 21:55:32.887067 master-0 kubenswrapper[3962]: W0308 21:55:32.882607 3962 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 08 21:55:32.887067 master-0 kubenswrapper[3962]: W0308 21:55:32.882616 3962 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 08 21:55:32.887067 master-0 kubenswrapper[3962]: W0308 21:55:32.882624 3962 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 08 21:55:32.887067 master-0 kubenswrapper[3962]: W0308 21:55:32.882634 3962 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 08 21:55:32.887067 master-0 kubenswrapper[3962]: W0308 21:55:32.882643 3962 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 08 21:55:32.887067 master-0 kubenswrapper[3962]: W0308 21:55:32.882651 3962 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 08 21:55:32.887067 master-0 kubenswrapper[3962]: W0308 21:55:32.882660 3962 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 08 21:55:32.887067 master-0 kubenswrapper[3962]: W0308 21:55:32.882670 3962 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 08 21:55:32.887067 master-0 kubenswrapper[3962]: W0308 21:55:32.882681 3962 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 08 21:55:32.887067 master-0 kubenswrapper[3962]: W0308 21:55:32.882690 3962 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 08 21:55:32.887973 master-0 kubenswrapper[3962]: I0308 21:55:32.882705 3962 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 08 21:55:32.887973 master-0 kubenswrapper[3962]: I0308 21:55:32.883137 3962 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 08 21:55:32.888274 master-0 kubenswrapper[3962]: I0308 21:55:32.888208 3962 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Mar 08 21:55:32.889779 master-0 kubenswrapper[3962]: I0308 21:55:32.889717 3962 server.go:997] "Starting client certificate rotation"
Mar 08 21:55:32.889779 master-0 kubenswrapper[3962]: I0308 21:55:32.889781 3962 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 08 21:55:32.890168 master-0 kubenswrapper[3962]: I0308 21:55:32.890053 3962 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 08 21:55:32.925551 master-0 kubenswrapper[3962]: I0308 21:55:32.925443 3962 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 08 21:55:32.935571 master-0 kubenswrapper[3962]: E0308 21:55:32.935447 3962 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 08 21:55:32.936776 master-0 kubenswrapper[3962]: I0308 21:55:32.936697 3962 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 08 21:55:32.961699 master-0 kubenswrapper[3962]: I0308 21:55:32.961613 3962 log.go:25] "Validated CRI v1 runtime API"
Mar 08 21:55:32.969258 master-0 kubenswrapper[3962]: I0308 21:55:32.969179 3962 log.go:25] "Validated CRI v1 image API"
Mar 08 21:55:32.972532 master-0 kubenswrapper[3962]: I0308 21:55:32.972472 3962 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 08 21:55:32.977945 master-0 kubenswrapper[3962]: I0308 21:55:32.977869 3962 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 f06a6435-a0b4-459f-8b49-c9a78e9e4f0c:/dev/vda3]
Mar 08 21:55:32.977945 master-0 kubenswrapper[3962]: I0308 21:55:32.977920 3962 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}]
Mar 08 21:55:32.993868 master-0 kubenswrapper[3962]: I0308 21:55:32.993535 3962 manager.go:217] Machine: {Timestamp:2026-03-08 21:55:32.991053341 +0000 UTC m=+0.624325563 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:60bd3117f077456eaef79571349311b3 SystemUUID:60bd3117-f077-456e-aef7-9571349311b3 BootID:6ad049a3-699b-4e1d-9b55-0bbdfa29d597 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:0e:40:5e Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:52:ad:85:17:24:3e Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Mar 08 21:55:32.993868 master-0 kubenswrapper[3962]: I0308 21:55:32.993821 3962 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Mar 08 21:55:32.994240 master-0 kubenswrapper[3962]: I0308 21:55:32.994148 3962 manager.go:233] Version: {KernelVersion:5.14.0-427.111.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602172219-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Mar 08 21:55:32.994567 master-0 kubenswrapper[3962]: I0308 21:55:32.994498 3962 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 08 21:55:32.994814 master-0 kubenswrapper[3962]: I0308 21:55:32.994748 3962 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 08 21:55:32.995133 master-0 kubenswrapper[3962]: I0308 21:55:32.994795 3962 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 08 21:55:32.995133 master-0 kubenswrapper[3962]: I0308 21:55:32.995118 3962 topology_manager.go:138] "Creating topology manager with none policy"
Mar 08 21:55:32.995133 master-0 kubenswrapper[3962]: I0308 21:55:32.995132 3962 container_manager_linux.go:303] "Creating device plugin manager"
Mar 08 21:55:32.995338 master-0 kubenswrapper[3962]: I0308 21:55:32.995201 3962 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 08 21:55:32.995338 master-0 kubenswrapper[3962]: I0308 21:55:32.995232 3962 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 08 21:55:32.996076 master-0 kubenswrapper[3962]: I0308 21:55:32.996027 3962 state_mem.go:36] "Initialized new in-memory state store"
Mar 08 21:55:32.996260 master-0 kubenswrapper[3962]: I0308 21:55:32.996217 3962 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Mar 08 21:55:33.002279 master-0 kubenswrapper[3962]: I0308 21:55:33.002233 3962 kubelet.go:418] "Attempting to sync node with API server"
Mar 08 21:55:33.002279 master-0 kubenswrapper[3962]: I0308 21:55:33.002266 3962 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 08 21:55:33.002455 master-0 kubenswrapper[3962]: I0308 21:55:33.002296 3962 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Mar 08 21:55:33.002455 master-0 kubenswrapper[3962]: I0308 21:55:33.002315 3962 kubelet.go:324] "Adding apiserver pod source"
Mar 08 21:55:33.002455 master-0 kubenswrapper[3962]: I0308 21:55:33.002335 3962 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 08 21:55:33.014794 master-0 kubenswrapper[3962]: W0308 21:55:33.014606 3962 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 08 21:55:33.014794 master-0 kubenswrapper[3962]: W0308 21:55:33.014707 3962 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 08 21:55:33.015016 master-0 kubenswrapper[3962]: E0308 21:55:33.014824 3962 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 08 21:55:33.015016 master-0 kubenswrapper[3962]: E0308 21:55:33.014876 3962 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 08 21:55:33.016514 master-0 kubenswrapper[3962]: I0308 21:55:33.016457 3962 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1"
Mar 08 21:55:33.019274 master-0 kubenswrapper[3962]: I0308 21:55:33.019216 3962 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 08 21:55:33.019653 master-0 kubenswrapper[3962]: I0308 21:55:33.019600 3962 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Mar 08 21:55:33.019653 master-0 kubenswrapper[3962]: I0308 21:55:33.019640 3962 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Mar 08 21:55:33.019653 master-0 kubenswrapper[3962]: I0308 21:55:33.019654 3962 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Mar 08 21:55:33.019897 master-0 kubenswrapper[3962]: I0308 21:55:33.019670 3962 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Mar 08 21:55:33.019897 master-0 kubenswrapper[3962]: I0308 21:55:33.019686 3962 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Mar 08 21:55:33.019897 master-0 kubenswrapper[3962]: I0308 21:55:33.019700 3962 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Mar 08 21:55:33.019897 master-0 kubenswrapper[3962]: I0308 21:55:33.019714 3962 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Mar 08 21:55:33.019897 master-0 kubenswrapper[3962]: I0308 21:55:33.019727 3962 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Mar 08 21:55:33.019897 master-0 kubenswrapper[3962]: I0308 21:55:33.019743 3962 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Mar 08 21:55:33.019897 master-0 kubenswrapper[3962]: I0308 21:55:33.019757 3962 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Mar 08 21:55:33.019897 master-0 kubenswrapper[3962]: I0308 21:55:33.019776 3962 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Mar 08 21:55:33.019897 master-0 kubenswrapper[3962]: I0308 21:55:33.019864 3962 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Mar 08 21:55:33.021307 master-0 kubenswrapper[3962]: I0308 21:55:33.021261 3962 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Mar 08 21:55:33.022244 master-0 kubenswrapper[3962]: I0308 21:55:33.022186 3962 server.go:1280] "Started kubelet"
Mar 08 21:55:33.022912 master-0 kubenswrapper[3962]: I0308 21:55:33.022815 3962 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 08 21:55:33.023194 master-0 kubenswrapper[3962]: I0308 21:55:33.023111 3962 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 08 21:55:33.023274 master-0 kubenswrapper[3962]: I0308 21:55:33.023139 3962 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 08 21:55:33.023367 master-0 kubenswrapper[3962]: I0308 21:55:33.023326 3962 server_v1.go:47] "podresources" method="list" useActivePods=true
Mar 08 21:55:33.024355 master-0 systemd[1]: Started Kubernetes Kubelet.
Mar 08 21:55:33.025640 master-0 kubenswrapper[3962]: I0308 21:55:33.024420 3962 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 08 21:55:33.026675 master-0 kubenswrapper[3962]: I0308 21:55:33.026622 3962 server.go:449] "Adding debug handlers to kubelet server"
Mar 08 21:55:33.027755 master-0 kubenswrapper[3962]: I0308 21:55:33.027626 3962 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Mar 08 21:55:33.027865 master-0 kubenswrapper[3962]: I0308 21:55:33.027820 3962 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 08 21:55:33.027999 master-0 kubenswrapper[3962]: I0308 21:55:33.027949 3962 volume_manager.go:287] "The desired_state_of_world populator starts"
Mar 08 21:55:33.027999 master-0 kubenswrapper[3962]: I0308 21:55:33.027970 3962 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 08 21:55:33.028155 master-0 kubenswrapper[3962]: E0308 21:55:33.028017 3962 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 08 21:55:33.028393 master-0 kubenswrapper[3962]: I0308 21:55:33.028351 3962 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Mar 08 21:55:33.029546 master-0 kubenswrapper[3962]: I0308 21:55:33.029494 3962 reconstruct.go:97] "Volume reconstruction finished"
Mar 08 21:55:33.029546 master-0 kubenswrapper[3962]: I0308 21:55:33.029536 3962 reconciler.go:26] "Reconciler: start to sync state"
Mar 08 21:55:33.029737 master-0 kubenswrapper[3962]: I0308 21:55:33.029716 3962 factory.go:55] Registering systemd factory
Mar 08 21:55:33.029737 master-0 kubenswrapper[3962]: I0308 21:55:33.029739 3962 factory.go:221] Registration of the systemd container factory successfully
Mar 08 21:55:33.030135 master-0 kubenswrapper[3962]: E0308 21:55:33.029747 3962 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms"
Mar 08 21:55:33.030364 master-0 kubenswrapper[3962]: I0308 21:55:33.030269 3962 factory.go:153] Registering CRI-O factory
Mar 08 21:55:33.030364 master-0 kubenswrapper[3962]: I0308 21:55:33.030361 3962 factory.go:221] Registration of the crio container factory successfully
Mar 08 21:55:33.030493 master-0 kubenswrapper[3962]: W0308 21:55:33.030066 3962 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 08 21:55:33.030573 master-0 kubenswrapper[3962]: E0308 21:55:33.030532 3962 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 08 21:55:33.030681 master-0 kubenswrapper[3962]: I0308 21:55:33.030576 3962 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Mar 08 21:55:33.030921 master-0 kubenswrapper[3962]: I0308 21:55:33.030714 3962 factory.go:103] Registering Raw factory
Mar 08 21:55:33.031009 master-0 kubenswrapper[3962]: I0308 21:55:33.030981 3962 manager.go:1196] Started watching for new ooms in manager
Mar 08 21:55:33.031194 master-0 kubenswrapper[3962]: E0308 21:55:33.029745 3962 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189afc696b142f70 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:33.021962096 +0000 UTC m=+0.655234328,LastTimestamp:2026-03-08 21:55:33.021962096 +0000 UTC m=+0.655234328,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 08 21:55:33.032923 master-0 kubenswrapper[3962]: I0308 21:55:33.032872 3962 manager.go:319] Starting recovery of all containers
Mar 08 21:55:33.063782 master-0 kubenswrapper[3962]: I0308 21:55:33.063280 3962 manager.go:324] Recovery completed
Mar 08 21:55:33.076544 master-0 kubenswrapper[3962]: I0308 21:55:33.076464 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 21:55:33.079446 master-0 kubenswrapper[3962]: I0308 21:55:33.079290 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 21:55:33.079446 master-0 kubenswrapper[3962]: I0308 21:55:33.079394 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 21:55:33.079446 master-0 kubenswrapper[3962]: I0308 21:55:33.079421 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 21:55:33.080903 master-0 kubenswrapper[3962]: I0308 21:55:33.080874 3962 cpu_manager.go:225] "Starting CPU manager" policy="none"
Mar 08 21:55:33.080967 master-0 kubenswrapper[3962]: I0308 21:55:33.080931 3962 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 08 21:55:33.081038 master-0 kubenswrapper[3962]: I0308 21:55:33.081013 3962 state_mem.go:36] "Initialized new in-memory state store"
Mar 08 21:55:33.086939 master-0 kubenswrapper[3962]: I0308 21:55:33.086870 3962 policy_none.go:49] "None policy: Start"
Mar 08 21:55:33.089575 master-0 kubenswrapper[3962]: I0308 21:55:33.089524 3962 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 08 21:55:33.089646 master-0 kubenswrapper[3962]: I0308 21:55:33.089584 3962 state_mem.go:35] "Initializing new in-memory state store"
Mar 08 21:55:33.129340 master-0 kubenswrapper[3962]: E0308 21:55:33.129276 3962 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 08 21:55:33.157034 master-0 kubenswrapper[3962]: I0308 21:55:33.156985 3962 manager.go:334] "Starting Device Plugin manager"
Mar 08 21:55:33.181132 master-0 kubenswrapper[3962]: I0308 21:55:33.157139 3962 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 08 21:55:33.181132 master-0 kubenswrapper[3962]: I0308 21:55:33.157193 3962 server.go:79] "Starting device plugin registration server"
Mar 08 21:55:33.181132 master-0 kubenswrapper[3962]: I0308 21:55:33.157935 3962 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 08 21:55:33.181132 master-0 kubenswrapper[3962]: I0308 21:55:33.157958 3962 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 08 21:55:33.181132 master-0 kubenswrapper[3962]: I0308 21:55:33.158258 3962 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Mar 08 21:55:33.181132 master-0 kubenswrapper[3962]: I0308 21:55:33.158381 3962 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Mar 08 21:55:33.181132 master-0 kubenswrapper[3962]: I0308 21:55:33.158391 3962 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 08 21:55:33.181132 master-0 kubenswrapper[3962]: E0308 21:55:33.159848 3962 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 08 21:55:33.182775 master-0 kubenswrapper[3962]: I0308 21:55:33.182665 3962 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 08 21:55:33.185984 master-0 kubenswrapper[3962]: I0308 21:55:33.185935 3962 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 08 21:55:33.186069 master-0 kubenswrapper[3962]: I0308 21:55:33.186034 3962 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 08 21:55:33.186174 master-0 kubenswrapper[3962]: I0308 21:55:33.186133 3962 kubelet.go:2335] "Starting kubelet main sync loop"
Mar 08 21:55:33.186304 master-0 kubenswrapper[3962]: E0308 21:55:33.186255 3962 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Mar 08 21:55:33.188787 master-0 kubenswrapper[3962]: W0308 21:55:33.188693 3962 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 08 21:55:33.188844 master-0 kubenswrapper[3962]: E0308 21:55:33.188813 3962 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 08 21:55:33.231927 master-0 kubenswrapper[3962]: E0308 21:55:33.231763 3962 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Mar 08 21:55:33.259104 master-0 kubenswrapper[3962]: I0308 21:55:33.258974 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 21:55:33.260541 master-0 kubenswrapper[3962]: I0308 21:55:33.260493 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 21:55:33.260667 master-0 kubenswrapper[3962]: I0308 21:55:33.260556 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 21:55:33.260667 master-0 kubenswrapper[3962]: I0308 21:55:33.260569 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 21:55:33.260667 master-0 kubenswrapper[3962]: I0308 21:55:33.260605 3962 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 08 21:55:33.267481 master-0 kubenswrapper[3962]: E0308 21:55:33.267399 3962 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 08 21:55:33.286613 master-0 kubenswrapper[3962]: I0308 21:55:33.286519 3962 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"]
Mar 08 21:55:33.286613 master-0 kubenswrapper[3962]: I0308 21:55:33.286611 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 21:55:33.287831 master-0 kubenswrapper[3962]: I0308 21:55:33.287780 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 21:55:33.287831 master-0 kubenswrapper[3962]: I0308 21:55:33.287828 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 21:55:33.287831 master-0 kubenswrapper[3962]: I0308 21:55:33.287838 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 21:55:33.288017 master-0 kubenswrapper[3962]: I0308 21:55:33.288007 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 21:55:33.288384 master-0 kubenswrapper[3962]: I0308 21:55:33.288328 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 08 21:55:33.288463 master-0 kubenswrapper[3962]: I0308 21:55:33.288394 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 21:55:33.289158 master-0 kubenswrapper[3962]: I0308 21:55:33.289118 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 21:55:33.289158 master-0 kubenswrapper[3962]: I0308 21:55:33.289157 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 21:55:33.289304 master-0 kubenswrapper[3962]: I0308 21:55:33.289170 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 21:55:33.289362 master-0 kubenswrapper[3962]: I0308 21:55:33.289338 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 21:55:33.289362 master-0 kubenswrapper[3962]: I0308 21:55:33.289357 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 21:55:33.289362 master-0 kubenswrapper[3962]: I0308 21:55:33.289365 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 21:55:33.289521 master-0 kubenswrapper[3962]: I0308 21:55:33.289404 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 21:55:33.289700 master-0 kubenswrapper[3962]: I0308 21:55:33.289626 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Mar 08 21:55:33.289700 master-0 kubenswrapper[3962]: I0308 21:55:33.289664 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 21:55:33.290065 master-0 kubenswrapper[3962]: I0308 21:55:33.290021 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 21:55:33.290065 master-0 kubenswrapper[3962]: I0308 21:55:33.290050 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 21:55:33.290065 master-0 kubenswrapper[3962]: I0308 21:55:33.290060 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 21:55:33.290273 master-0 kubenswrapper[3962]: I0308 21:55:33.290170 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 21:55:33.290383 master-0 kubenswrapper[3962]: I0308 21:55:33.290349 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 21:55:33.290383 master-0 kubenswrapper[3962]: I0308 21:55:33.290376 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 21:55:33.290501 master-0 kubenswrapper[3962]: I0308 21:55:33.290480 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 21:55:33.290501 master-0 kubenswrapper[3962]: I0308 21:55:33.290497 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 21:55:33.290629 master-0 kubenswrapper[3962]: I0308 21:55:33.290506 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 21:55:33.290875 master-0 kubenswrapper[3962]: I0308 21:55:33.290839 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 21:55:33.290939 master-0 kubenswrapper[3962]: I0308 21:55:33.290872 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 21:55:33.290939 master-0 kubenswrapper[3962]: I0308 21:55:33.290911 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 21:55:33.291048 master-0 kubenswrapper[3962]: I0308 21:55:33.291030 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 21:55:33.291178 master-0 kubenswrapper[3962]: I0308 21:55:33.291149 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 21:55:33.291243 master-0 kubenswrapper[3962]: I0308 21:55:33.291184 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 21:55:33.291455 master-0 kubenswrapper[3962]: I0308 21:55:33.291312 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 21:55:33.291553 master-0 kubenswrapper[3962]: I0308 21:55:33.291466 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 21:55:33.291553 master-0 kubenswrapper[3962]: I0308 21:55:33.291526 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 21:55:33.291988 master-0 kubenswrapper[3962]: I0308 21:55:33.291947 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 21:55:33.291988 master-0 kubenswrapper[3962]: I0308 21:55:33.291967 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 21:55:33.291988 master-0 kubenswrapper[3962]: I0308 21:55:33.291976 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 21:55:33.292221 master-0 kubenswrapper[3962]: I0308 21:55:33.292065 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 21:55:33.292221 master-0 kubenswrapper[3962]: I0308 21:55:33.292134 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 08 21:55:33.292221 master-0 kubenswrapper[3962]: I0308 21:55:33.292155 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 21:55:33.292221 master-0 kubenswrapper[3962]: I0308 21:55:33.292161 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 21:55:33.292431 master-0 kubenswrapper[3962]: I0308 21:55:33.292179 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 21:55:33.293635 master-0 kubenswrapper[3962]: I0308 21:55:33.293575 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 21:55:33.293715 master-0 kubenswrapper[3962]: I0308 21:55:33.293663 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 21:55:33.293715 master-0 kubenswrapper[3962]: I0308 21:55:33.293683 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 21:55:33.331720 master-0 kubenswrapper[3962]: I0308 21:55:33.331622 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 21:55:33.331720 master-0 kubenswrapper[3962]: I0308 21:55:33.331711 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 21:55:33.331902 master-0 kubenswrapper[3962]: I0308 21:55:33.331784 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 21:55:33.331902 master-0 kubenswrapper[3962]: I0308 21:55:33.331856 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 08 21:55:33.332032 master-0 kubenswrapper[3962]: I0308 21:55:33.331974 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 08 21:55:33.332242 master-0 kubenswrapper[3962]: I0308 21:55:33.332178 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 21:55:33.332310 master-0 kubenswrapper[3962]: I0308 21:55:33.332254 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 21:55:33.332372 master-0 kubenswrapper[3962]: I0308 21:55:33.332321 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 21:55:33.332439 master-0 kubenswrapper[3962]: I0308 21:55:33.332375 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 21:55:33.332439 master-0 kubenswrapper[3962]: I0308 21:55:33.332411 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 08 21:55:33.332554 master-0 kubenswrapper[3962]: I0308 21:55:33.332456 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 08 21:55:33.332554 master-0 kubenswrapper[3962]: I0308 21:55:33.332505 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 08 21:55:33.332665 master-0 kubenswrapper[3962]: I0308 21:55:33.332549 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 21:55:33.332665 master-0 kubenswrapper[3962]: I0308 21:55:33.332600 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 21:55:33.332665 master-0 kubenswrapper[3962]: I0308 21:55:33.332651 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 08 21:55:33.332828 master-0 kubenswrapper[3962]: I0308 21:55:33.332699 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 21:55:33.332828 master-0 kubenswrapper[3962]: I0308 21:55:33.332748 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 21:55:33.433808 master-0 kubenswrapper[3962]: I0308 21:55:33.433629 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 08 21:55:33.433808 master-0 kubenswrapper[3962]: I0308 21:55:33.433806 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 21:55:33.434150 master-0 kubenswrapper[3962]: I0308 21:55:33.433749 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 08 21:55:33.434150 master-0 kubenswrapper[3962]: I0308 21:55:33.433933 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 21:55:33.434150 master-0 kubenswrapper[3962]: I0308 21:55:33.434042 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 21:55:33.434331 master-0 kubenswrapper[3962]: I0308 21:55:33.434094 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 21:55:33.434331 master-0 kubenswrapper[3962]: I0308 21:55:33.434185 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 21:55:33.434331 master-0 kubenswrapper[3962]: I0308 21:55:33.434122 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 21:55:33.434331 master-0 kubenswrapper[3962]: I0308 21:55:33.434225 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 21:55:33.434331 master-0 kubenswrapper[3962]: I0308 21:55:33.434267 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 21:55:33.434331 master-0 kubenswrapper[3962]: I0308 21:55:33.434311 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 21:55:33.434650 master-0 kubenswrapper[3962]: I0308 21:55:33.434335 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 21:55:33.434650 master-0 kubenswrapper[3962]: I0308 21:55:33.434381 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 08 21:55:33.434650 master-0 kubenswrapper[3962]: I0308 21:55:33.434409 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 08 21:55:33.434650 master-0 kubenswrapper[3962]: I0308 21:55:33.434438 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 08 21:55:33.434650 master-0 kubenswrapper[3962]: I0308 21:55:33.434486 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 21:55:33.434650 master-0 kubenswrapper[3962]: I0308 21:55:33.434542 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 21:55:33.434650 master-0 kubenswrapper[3962]: I0308 21:55:33.434590 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 21:55:33.434650 master-0 kubenswrapper[3962]: I0308 21:55:33.434610 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 21:55:33.434650 master-0 kubenswrapper[3962]: I0308 21:55:33.434623 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 08 21:55:33.434650 master-0 kubenswrapper[3962]: I0308 21:55:33.434646 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 21:55:33.434650 master-0 kubenswrapper[3962]: I0308 21:55:33.434644 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 21:55:33.434650 master-0 kubenswrapper[3962]: I0308 21:55:33.434680 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 21:55:33.435525 master-0 kubenswrapper[3962]: I0308 21:55:33.434708 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 08 21:55:33.435525 master-0 kubenswrapper[3962]: I0308 21:55:33.434721 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 21:55:33.435525 master-0 kubenswrapper[3962]: I0308 21:55:33.434775 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 08 21:55:33.435525 master-0 kubenswrapper[3962]: I0308 21:55:33.434786 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 08 21:55:33.435525 master-0 kubenswrapper[3962]: I0308 21:55:33.434825 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 08 21:55:33.435525 master-0 kubenswrapper[3962]: I0308 21:55:33.434854 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 21:55:33.435525 master-0 kubenswrapper[3962]: I0308 21:55:33.434877 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 21:55:33.435525 master-0 kubenswrapper[3962]: I0308 21:55:33.434882 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 08 21:55:33.435525 master-0 kubenswrapper[3962]: I0308 21:55:33.434902 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") "
pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 21:55:33.435525 master-0 kubenswrapper[3962]: I0308 21:55:33.434827 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 08 21:55:33.435525 master-0 kubenswrapper[3962]: I0308 21:55:33.434935 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 21:55:33.468303 master-0 kubenswrapper[3962]: I0308 21:55:33.468190 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 21:55:33.469902 master-0 kubenswrapper[3962]: I0308 21:55:33.469838 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 21:55:33.469902 master-0 kubenswrapper[3962]: I0308 21:55:33.469881 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 21:55:33.469902 master-0 kubenswrapper[3962]: I0308 21:55:33.469894 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 21:55:33.470239 master-0 kubenswrapper[3962]: I0308 21:55:33.469978 3962 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 08 21:55:33.470978 master-0 kubenswrapper[3962]: E0308 21:55:33.470910 3962 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 08 21:55:33.618123 master-0 kubenswrapper[3962]: I0308 21:55:33.617972 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 08 21:55:33.626247 master-0 kubenswrapper[3962]: I0308 21:55:33.626148 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 08 21:55:33.633862 master-0 kubenswrapper[3962]: E0308 21:55:33.633757 3962 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 08 21:55:33.641154 master-0 kubenswrapper[3962]: I0308 21:55:33.641050 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 21:55:33.668013 master-0 kubenswrapper[3962]: I0308 21:55:33.667886 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 21:55:33.678522 master-0 kubenswrapper[3962]: I0308 21:55:33.678408 3962 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 08 21:55:33.872000 master-0 kubenswrapper[3962]: I0308 21:55:33.871893 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 21:55:33.874134 master-0 kubenswrapper[3962]: I0308 21:55:33.873986 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 21:55:33.874134 master-0 kubenswrapper[3962]: I0308 21:55:33.874109 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 21:55:33.874134 master-0 kubenswrapper[3962]: I0308 21:55:33.874133 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 21:55:33.874313 master-0 kubenswrapper[3962]: I0308 21:55:33.874232 3962 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 08 21:55:33.875743 master-0 kubenswrapper[3962]: E0308 21:55:33.875673 3962 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 08 21:55:34.025137 master-0 kubenswrapper[3962]: I0308 21:55:34.025047 3962 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 21:55:34.072850 master-0 kubenswrapper[3962]: W0308 21:55:34.072716 3962 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 21:55:34.072850 master-0 kubenswrapper[3962]: E0308 21:55:34.072839 3962 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 08 21:55:34.212014 master-0 kubenswrapper[3962]: W0308 21:55:34.211740 3962 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 21:55:34.212014 master-0 kubenswrapper[3962]: E0308 21:55:34.211861 3962 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 08 21:55:34.424396 master-0 kubenswrapper[3962]: W0308 21:55:34.424201 3962 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 21:55:34.424396 master-0 kubenswrapper[3962]: E0308 21:55:34.424347 3962 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 08 21:55:34.436167 master-0 kubenswrapper[3962]: E0308 21:55:34.436027 3962 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Mar 08 21:55:34.633329 master-0 kubenswrapper[3962]: W0308 21:55:34.633153 3962 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 21:55:34.633329 master-0 kubenswrapper[3962]: E0308 21:55:34.633321 3962 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 08 21:55:34.676389 master-0 kubenswrapper[3962]: I0308 21:55:34.676202 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 21:55:34.678321 master-0 kubenswrapper[3962]: I0308 21:55:34.678258 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 21:55:34.678438 master-0 kubenswrapper[3962]: I0308 21:55:34.678326 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 21:55:34.678438 master-0 kubenswrapper[3962]: I0308 21:55:34.678346 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 21:55:34.678438 master-0 kubenswrapper[3962]: I0308 21:55:34.678404 3962 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 08 21:55:34.679741 master-0 kubenswrapper[3962]: E0308 21:55:34.679666 3962 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 08 21:55:35.024873 master-0 kubenswrapper[3962]: I0308 21:55:35.024631 3962 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 21:55:35.110463 master-0 kubenswrapper[3962]: I0308 21:55:35.110274 3962 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 08 21:55:35.112323 master-0 kubenswrapper[3962]: E0308 21:55:35.112261 3962 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": 
dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 08 21:55:35.521255 master-0 kubenswrapper[3962]: W0308 21:55:35.521163 3962 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf78c05e1499b533b83f091333d61f045.slice/crio-c33ac92fca6e80e326ddd9d0778e2a7dba8745d75895b03f171586f048347f52 WatchSource:0}: Error finding container c33ac92fca6e80e326ddd9d0778e2a7dba8745d75895b03f171586f048347f52: Status 404 returned error can't find the container with id c33ac92fca6e80e326ddd9d0778e2a7dba8745d75895b03f171586f048347f52 Mar 08 21:55:35.528304 master-0 kubenswrapper[3962]: I0308 21:55:35.528248 3962 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 08 21:55:35.530708 master-0 kubenswrapper[3962]: W0308 21:55:35.530650 3962 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1a56802af72ce1aac6b5077f1695ac0.slice/crio-5cccef451dfdb5aa39f76464f49bfb358a48c497dc23473415516a86865fc62f WatchSource:0}: Error finding container 5cccef451dfdb5aa39f76464f49bfb358a48c497dc23473415516a86865fc62f: Status 404 returned error can't find the container with id 5cccef451dfdb5aa39f76464f49bfb358a48c497dc23473415516a86865fc62f Mar 08 21:55:35.576955 master-0 kubenswrapper[3962]: W0308 21:55:35.576886 3962 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9add8df47182fc2eaf8cd78016ebe72.slice/crio-c29732d6a1771e6db51e932e553e6dcc162c74870f5049944a74cde5e36091d0 WatchSource:0}: Error finding container c29732d6a1771e6db51e932e553e6dcc162c74870f5049944a74cde5e36091d0: Status 404 returned error can't find the container with id c29732d6a1771e6db51e932e553e6dcc162c74870f5049944a74cde5e36091d0 Mar 08 21:55:35.644042 master-0 kubenswrapper[3962]: W0308 21:55:35.643957 3962 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f77c8e18b751d90bc0dfe2d4e304050.slice/crio-d84ecfe0cb715c9b7fdf6ae6c02c8d335c1023b605928a05b4d08849816a5d3c WatchSource:0}: Error finding container d84ecfe0cb715c9b7fdf6ae6c02c8d335c1023b605928a05b4d08849816a5d3c: Status 404 returned error can't find the container with id d84ecfe0cb715c9b7fdf6ae6c02c8d335c1023b605928a05b4d08849816a5d3c Mar 08 21:55:35.756496 master-0 kubenswrapper[3962]: W0308 21:55:35.756362 3962 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod354f29997baa583b6238f7de9108ee10.slice/crio-eedb99ce5fd0482117fcb1e638ee1d23354e4695c591afb02611065662c5742f WatchSource:0}: Error finding container eedb99ce5fd0482117fcb1e638ee1d23354e4695c591afb02611065662c5742f: Status 404 returned error can't find the container with id eedb99ce5fd0482117fcb1e638ee1d23354e4695c591afb02611065662c5742f Mar 08 21:55:35.880120 master-0 kubenswrapper[3962]: E0308 21:55:35.879881 3962 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189afc696b142f70 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:33.021962096 +0000 UTC m=+0.655234328,LastTimestamp:2026-03-08 21:55:33.021962096 +0000 UTC m=+0.655234328,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:36.024898 master-0 kubenswrapper[3962]: I0308 21:55:36.024665 3962 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 21:55:36.038323 master-0 kubenswrapper[3962]: E0308 21:55:36.038222 3962 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Mar 08 21:55:36.200054 master-0 kubenswrapper[3962]: I0308 21:55:36.199875 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"d84ecfe0cb715c9b7fdf6ae6c02c8d335c1023b605928a05b4d08849816a5d3c"} Mar 08 21:55:36.201453 master-0 kubenswrapper[3962]: I0308 21:55:36.201428 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"c29732d6a1771e6db51e932e553e6dcc162c74870f5049944a74cde5e36091d0"} Mar 08 21:55:36.202641 master-0 kubenswrapper[3962]: I0308 21:55:36.202618 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"5cccef451dfdb5aa39f76464f49bfb358a48c497dc23473415516a86865fc62f"} Mar 08 21:55:36.204361 master-0 kubenswrapper[3962]: I0308 21:55:36.204334 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"c33ac92fca6e80e326ddd9d0778e2a7dba8745d75895b03f171586f048347f52"} Mar 08 21:55:36.206143 master-0 kubenswrapper[3962]: I0308 21:55:36.206053 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"eedb99ce5fd0482117fcb1e638ee1d23354e4695c591afb02611065662c5742f"} Mar 08 21:55:36.280192 master-0 kubenswrapper[3962]: I0308 21:55:36.280041 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 21:55:36.283875 master-0 kubenswrapper[3962]: I0308 21:55:36.283827 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 21:55:36.283875 master-0 kubenswrapper[3962]: I0308 21:55:36.283877 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 21:55:36.283974 master-0 kubenswrapper[3962]: I0308 21:55:36.283890 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 21:55:36.283974 master-0 kubenswrapper[3962]: I0308 21:55:36.283952 3962 
kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 08 21:55:36.285006 master-0 kubenswrapper[3962]: E0308 21:55:36.284970 3962 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 08 21:55:36.725032 master-0 kubenswrapper[3962]: W0308 21:55:36.724968 3962 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 21:55:36.725032 master-0 kubenswrapper[3962]: E0308 21:55:36.725042 3962 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 08 21:55:36.904207 master-0 kubenswrapper[3962]: W0308 21:55:36.904132 3962 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 21:55:36.904207 master-0 kubenswrapper[3962]: E0308 21:55:36.904204 3962 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 08 21:55:37.025229 master-0 kubenswrapper[3962]: I0308 21:55:37.025062 3962 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 21:55:37.074613 master-0 kubenswrapper[3962]: W0308 21:55:37.074543 3962 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 21:55:37.074613 master-0 kubenswrapper[3962]: E0308 21:55:37.074614 3962 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 08 21:55:37.306578 master-0 kubenswrapper[3962]: W0308 21:55:37.306157 3962 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 21:55:37.306669 master-0 kubenswrapper[3962]: E0308 21:55:37.306608 3962 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to 
list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 08 21:55:38.024701 master-0 kubenswrapper[3962]: I0308 21:55:38.024615 3962 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 21:55:38.213927 master-0 kubenswrapper[3962]: I0308 21:55:38.213358 3962 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="19b1636ab72d9a9b9983713d62f8565fb7c16719c6345915ce9c3d89fbded136" exitCode=0 Mar 08 21:55:38.213927 master-0 kubenswrapper[3962]: I0308 21:55:38.213412 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"19b1636ab72d9a9b9983713d62f8565fb7c16719c6345915ce9c3d89fbded136"} Mar 08 21:55:38.213927 master-0 kubenswrapper[3962]: I0308 21:55:38.213536 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 21:55:38.215463 master-0 kubenswrapper[3962]: I0308 21:55:38.214908 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 21:55:38.215463 master-0 kubenswrapper[3962]: I0308 21:55:38.214932 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 21:55:38.215463 master-0 kubenswrapper[3962]: I0308 21:55:38.214941 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 21:55:39.026378 master-0 kubenswrapper[3962]: I0308 21:55:39.026237 3962 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 21:55:39.218948 master-0 kubenswrapper[3962]: I0308 21:55:39.218823 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"f40be1d4a754000339d3870a29f35b23044b2b81588631c57cf192ab4e70d6fd"} Mar 08 21:55:39.218948 master-0 kubenswrapper[3962]: I0308 21:55:39.218948 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"ca95d22d6228d434ce4ed2f415b15a00e7effc076e30de148f0569774a6d01db"} Mar 08 21:55:39.219731 master-0 kubenswrapper[3962]: I0308 21:55:39.219123 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 21:55:39.220701 master-0 kubenswrapper[3962]: I0308 21:55:39.220648 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 21:55:39.223992 master-0 kubenswrapper[3962]: I0308 21:55:39.220977 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 21:55:39.223992 master-0 kubenswrapper[3962]: I0308 21:55:39.221389 3962 kubelet_node_status.go:724] "Recording event 
message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 21:55:39.225234 master-0 kubenswrapper[3962]: I0308 21:55:39.225197 3962 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/0.log" Mar 08 21:55:39.225886 master-0 kubenswrapper[3962]: I0308 21:55:39.225829 3962 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="00c624940da9759e4ef79321c55e8e016bd93b4aefccc302c3ae6d377c718b87" exitCode=1 Mar 08 21:55:39.225938 master-0 kubenswrapper[3962]: I0308 21:55:39.225889 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"00c624940da9759e4ef79321c55e8e016bd93b4aefccc302c3ae6d377c718b87"} Mar 08 21:55:39.225974 master-0 kubenswrapper[3962]: I0308 21:55:39.225950 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 21:55:39.226693 master-0 kubenswrapper[3962]: I0308 21:55:39.226662 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 21:55:39.226781 master-0 kubenswrapper[3962]: I0308 21:55:39.226700 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 21:55:39.226781 master-0 kubenswrapper[3962]: I0308 21:55:39.226714 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 21:55:39.227107 master-0 kubenswrapper[3962]: I0308 21:55:39.227054 3962 scope.go:117] "RemoveContainer" containerID="00c624940da9759e4ef79321c55e8e016bd93b4aefccc302c3ae6d377c718b87" Mar 08 21:55:39.240027 master-0 kubenswrapper[3962]: E0308 21:55:39.239949 3962 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Mar 08 21:55:39.330611 master-0 kubenswrapper[3962]: I0308 21:55:39.330542 3962 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 08 21:55:39.332207 master-0 kubenswrapper[3962]: E0308 21:55:39.332174 3962 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 08 21:55:39.486139 master-0 kubenswrapper[3962]: I0308 21:55:39.486045 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 21:55:39.487493 master-0 kubenswrapper[3962]: I0308 21:55:39.487453 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 21:55:39.487558 master-0 kubenswrapper[3962]: I0308 21:55:39.487519 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 21:55:39.487558 master-0 kubenswrapper[3962]: I0308 21:55:39.487533 3962 kubelet_node_status.go:724] "Recording event 
message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 21:55:39.487695 master-0 kubenswrapper[3962]: I0308 21:55:39.487603 3962 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 08 21:55:39.488769 master-0 kubenswrapper[3962]: E0308 21:55:39.488718 3962 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 08 21:55:40.024723 master-0 kubenswrapper[3962]: I0308 21:55:40.024588 3962 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 21:55:40.229637 master-0 kubenswrapper[3962]: I0308 21:55:40.229590 3962 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/1.log" Mar 08 21:55:40.230503 master-0 kubenswrapper[3962]: I0308 21:55:40.230413 3962 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/0.log" Mar 08 21:55:40.231348 master-0 kubenswrapper[3962]: I0308 21:55:40.230820 3962 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="6a5bf9045d4914b5d354f57de8fe7de3c463e5c1dd963b35ba2ae400e73476cf" exitCode=1 Mar 08 21:55:40.231348 master-0 kubenswrapper[3962]: I0308 21:55:40.230905 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 21:55:40.231348 master-0 kubenswrapper[3962]: I0308 21:55:40.230918 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 21:55:40.231348 master-0 kubenswrapper[3962]: I0308 21:55:40.230935 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"6a5bf9045d4914b5d354f57de8fe7de3c463e5c1dd963b35ba2ae400e73476cf"} Mar 08 21:55:40.231348 master-0 kubenswrapper[3962]: I0308 21:55:40.231038 3962 scope.go:117] "RemoveContainer" containerID="00c624940da9759e4ef79321c55e8e016bd93b4aefccc302c3ae6d377c718b87" Mar 08 21:55:40.231939 master-0 kubenswrapper[3962]: I0308 21:55:40.231701 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 21:55:40.231939 master-0 kubenswrapper[3962]: I0308 21:55:40.231761 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 21:55:40.231939 master-0 kubenswrapper[3962]: I0308 21:55:40.231785 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 21:55:40.231939 master-0 kubenswrapper[3962]: I0308 21:55:40.231820 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 21:55:40.231939 master-0 kubenswrapper[3962]: I0308 21:55:40.231844 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 21:55:40.231939 master-0 kubenswrapper[3962]: I0308 21:55:40.231856 3962 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 21:55:40.232242 master-0 kubenswrapper[3962]: I0308 21:55:40.232219 3962 scope.go:117] "RemoveContainer" containerID="6a5bf9045d4914b5d354f57de8fe7de3c463e5c1dd963b35ba2ae400e73476cf" Mar 08 21:55:40.232461 master-0 kubenswrapper[3962]: E0308 21:55:40.232432 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72" Mar 08 21:55:41.024628 master-0 kubenswrapper[3962]: I0308 21:55:41.024534 3962 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 21:55:41.070622 master-0 kubenswrapper[3962]: W0308 21:55:41.070533 3962 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 21:55:41.070778 master-0 kubenswrapper[3962]: E0308 21:55:41.070638 3962 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 08 21:55:41.233620 master-0 kubenswrapper[3962]: I0308 21:55:41.233547 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 21:55:41.235107 master-0 kubenswrapper[3962]: I0308 21:55:41.235056 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 21:55:41.235155 master-0 kubenswrapper[3962]: I0308 21:55:41.235131 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 21:55:41.235155 master-0 kubenswrapper[3962]: I0308 21:55:41.235143 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 21:55:41.235740 master-0 kubenswrapper[3962]: I0308 21:55:41.235721 3962 scope.go:117] "RemoveContainer" containerID="6a5bf9045d4914b5d354f57de8fe7de3c463e5c1dd963b35ba2ae400e73476cf" Mar 08 21:55:41.235929 master-0 kubenswrapper[3962]: E0308 21:55:41.235902 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72" Mar 08 21:55:41.404088 master-0 kubenswrapper[3962]: W0308 21:55:41.403952 3962 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 21:55:41.404088 master-0 kubenswrapper[3962]: E0308 21:55:41.404062 3962 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 08 21:55:41.567457 master-0 kubenswrapper[3962]: W0308 21:55:41.567334 3962 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 21:55:41.567572 master-0 kubenswrapper[3962]: E0308 21:55:41.567463 3962 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 08 21:55:42.024865 master-0 kubenswrapper[3962]: I0308 21:55:42.024713 3962 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 21:55:43.024845 master-0 kubenswrapper[3962]: I0308 21:55:43.024743 3962 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 21:55:43.160278 master-0 kubenswrapper[3962]: E0308 21:55:43.160049 3962 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 08 21:55:43.242069 master-0 kubenswrapper[3962]: I0308 21:55:43.242007 3962 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/1.log" Mar 08 21:55:43.245094 master-0 kubenswrapper[3962]: I0308 21:55:43.245007 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"f50874fd44a38fe2052c0dd021aa5c5eab2b987367eeee5b46f35dae49f0f668"} Mar 08 21:55:43.245223 master-0 kubenswrapper[3962]: I0308 21:55:43.245191 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 21:55:43.246351 master-0 kubenswrapper[3962]: I0308 21:55:43.246313 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 21:55:43.246422 master-0 kubenswrapper[3962]: I0308 21:55:43.246380 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 21:55:43.246422 master-0 kubenswrapper[3962]: I0308 21:55:43.246404 3962 kubelet_node_status.go:724] "Recording event message for node" 
node="master-0" event="NodeHasSufficientPID" Mar 08 21:55:43.249054 master-0 kubenswrapper[3962]: I0308 21:55:43.249006 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"a38551dfdc41a69ef8701c103d6e1d1e4d82312c574f00743d9049e10be45331"} Mar 08 21:55:43.251355 master-0 kubenswrapper[3962]: I0308 21:55:43.251310 3962 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="da776c7c3ffac41c9193152c13ad24a2c2d14135225b75898e7c53fb459df62b" exitCode=0 Mar 08 21:55:43.251415 master-0 kubenswrapper[3962]: I0308 21:55:43.251362 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerDied","Data":"da776c7c3ffac41c9193152c13ad24a2c2d14135225b75898e7c53fb459df62b"} Mar 08 21:55:43.251529 master-0 kubenswrapper[3962]: I0308 21:55:43.251500 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 21:55:43.252659 master-0 kubenswrapper[3962]: I0308 21:55:43.252616 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 21:55:43.252719 master-0 kubenswrapper[3962]: I0308 21:55:43.252664 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 21:55:43.252719 master-0 kubenswrapper[3962]: I0308 21:55:43.252688 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 21:55:43.257055 master-0 kubenswrapper[3962]: I0308 21:55:43.257021 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 21:55:43.257863 master-0 kubenswrapper[3962]: I0308 21:55:43.257828 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 21:55:43.257903 master-0 kubenswrapper[3962]: I0308 21:55:43.257875 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 21:55:43.257903 master-0 kubenswrapper[3962]: I0308 21:55:43.257898 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 21:55:44.258445 master-0 kubenswrapper[3962]: I0308 21:55:44.257901 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"8d8ef0d2f7570923c4fa1a9617292413de2da9937c525cc65b8fbe3433d3ca3e"} Mar 08 21:55:44.258445 master-0 kubenswrapper[3962]: I0308 21:55:44.257955 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 21:55:44.259197 master-0 kubenswrapper[3962]: I0308 21:55:44.259162 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 21:55:44.259262 master-0 kubenswrapper[3962]: I0308 21:55:44.259207 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 21:55:44.259262 master-0 kubenswrapper[3962]: I0308 21:55:44.259218 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasSufficientPID" Mar 08 21:55:44.925209 master-0 kubenswrapper[3962]: I0308 21:55:44.925136 3962 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 21:55:44.925209 master-0 kubenswrapper[3962]: W0308 21:55:44.925150 3962 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 08 21:55:44.925530 master-0 kubenswrapper[3962]: E0308 21:55:44.925245 3962 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 08 21:55:45.028145 master-0 kubenswrapper[3962]: I0308 21:55:45.028056 3962 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 21:55:45.648543 master-0 kubenswrapper[3962]: E0308 21:55:45.648455 3962 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 08 21:55:45.886036 master-0 kubenswrapper[3962]: E0308 21:55:45.885853 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189afc696b142f70 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:33.021962096 +0000 UTC m=+0.655234328,LastTimestamp:2026-03-08 21:55:33.021962096 +0000 UTC m=+0.655234328,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:45.890252 master-0 kubenswrapper[3962]: I0308 21:55:45.890207 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 21:55:45.891013 master-0 kubenswrapper[3962]: E0308 21:55:45.890936 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189afc696e8030bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:33.079371967 +0000 UTC 
m=+0.712644209,LastTimestamp:2026-03-08 21:55:33.079371967 +0000 UTC m=+0.712644209,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:45.891393 master-0 kubenswrapper[3962]: I0308 21:55:45.891340 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 21:55:45.891393 master-0 kubenswrapper[3962]: I0308 21:55:45.891393 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 21:55:45.891501 master-0 kubenswrapper[3962]: I0308 21:55:45.891405 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 21:55:45.891501 master-0 kubenswrapper[3962]: I0308 21:55:45.891463 3962 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 08 21:55:45.895481 master-0 kubenswrapper[3962]: E0308 21:55:45.895321 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189afc696e80c9e2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:33.07941117 +0000 UTC m=+0.712683412,LastTimestamp:2026-03-08 21:55:33.07941117 +0000 UTC m=+0.712683412,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:45.895596 master-0 kubenswrapper[3962]: E0308 21:55:45.895434 3962 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Mar 08 21:55:45.901396 master-0 kubenswrapper[3962]: E0308 21:55:45.901259 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189afc696e811b73 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:33.079432051 +0000 UTC m=+0.712704293,LastTimestamp:2026-03-08 21:55:33.079432051 +0000 UTC m=+0.712704293,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:45.905731 master-0 kubenswrapper[3962]: E0308 21:55:45.905648 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189afc6973f88902 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:33.171144962 +0000 UTC m=+0.804417184,LastTimestamp:2026-03-08 21:55:33.171144962 +0000 UTC m=+0.804417184,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:45.912179 master-0 kubenswrapper[3962]: E0308 21:55:45.912110 3962 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189afc696e8030bf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189afc696e8030bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:33.079371967 +0000 UTC m=+0.712644209,LastTimestamp:2026-03-08 21:55:33.260541854 +0000 UTC m=+0.893814066,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:45.933421 master-0 kubenswrapper[3962]: E0308 21:55:45.933279 3962 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189afc696e80c9e2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189afc696e80c9e2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:33.07941117 +0000 UTC m=+0.712683412,LastTimestamp:2026-03-08 21:55:33.260564094 +0000 UTC m=+0.893836296,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:45.971115 master-0 kubenswrapper[3962]: E0308 21:55:45.970533 3962 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189afc696e811b73\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189afc696e811b73 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:33.079432051 +0000 UTC m=+0.712704293,LastTimestamp:2026-03-08 21:55:33.260576014 +0000 UTC m=+0.893848226,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.008577 master-0 kubenswrapper[3962]: E0308 21:55:46.006811 3962 event.go:359] "Server rejected event 
(will not retry!)" err="events \"master-0.189afc696e8030bf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189afc696e8030bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:33.079371967 +0000 UTC m=+0.712644209,LastTimestamp:2026-03-08 21:55:33.287815268 +0000 UTC m=+0.921087470,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.030024 master-0 kubenswrapper[3962]: E0308 21:55:46.029644 3962 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189afc696e80c9e2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189afc696e80c9e2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:33.07941117 +0000 UTC m=+0.712683412,LastTimestamp:2026-03-08 21:55:33.287835069 +0000 UTC m=+0.921107271,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.035425 master-0 kubenswrapper[3962]: I0308 21:55:46.035235 3962 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 21:55:46.035425 master-0 kubenswrapper[3962]: E0308 21:55:46.035229 3962 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189afc696e811b73\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189afc696e811b73 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:33.079432051 +0000 UTC m=+0.712704293,LastTimestamp:2026-03-08 21:55:33.287844559 +0000 UTC m=+0.921116761,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.046114 master-0 kubenswrapper[3962]: E0308 21:55:46.045032 3962 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189afc696e8030bf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189afc696e8030bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:33.079371967 +0000 UTC m=+0.712644209,LastTimestamp:2026-03-08 21:55:33.289141554 +0000 UTC m=+0.922413756,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.049909 master-0 kubenswrapper[3962]: E0308 21:55:46.049782 3962 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189afc696e80c9e2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189afc696e80c9e2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:33.07941117 +0000 UTC m=+0.712683412,LastTimestamp:2026-03-08 21:55:33.289166584 +0000 UTC m=+0.922438796,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.054018 master-0 kubenswrapper[3962]: E0308 21:55:46.053912 3962 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189afc696e811b73\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189afc696e811b73 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:33.079432051 +0000 UTC m=+0.712704293,LastTimestamp:2026-03-08 21:55:33.289178675 +0000 UTC m=+0.922450887,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.058959 master-0 kubenswrapper[3962]: E0308 21:55:46.058869 3962 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189afc696e8030bf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189afc696e8030bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:33.079371967 +0000 UTC m=+0.712644209,LastTimestamp:2026-03-08 21:55:33.289349737 +0000 UTC m=+0.922621939,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.063928 master-0 kubenswrapper[3962]: E0308 21:55:46.063790 3962 event.go:359] "Server 
rejected event (will not retry!)" err="events \"master-0.189afc696e80c9e2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189afc696e80c9e2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:33.07941117 +0000 UTC m=+0.712683412,LastTimestamp:2026-03-08 21:55:33.289362777 +0000 UTC m=+0.922634979,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.068100 master-0 kubenswrapper[3962]: E0308 21:55:46.068019 3962 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189afc696e811b73\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189afc696e811b73 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:33.079432051 +0000 UTC m=+0.712704293,LastTimestamp:2026-03-08 21:55:33.289371747 +0000 UTC m=+0.922643949,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.072591 master-0 kubenswrapper[3962]: E0308 21:55:46.072417 3962 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189afc696e8030bf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189afc696e8030bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:33.079371967 +0000 UTC m=+0.712644209,LastTimestamp:2026-03-08 21:55:33.290044725 +0000 UTC m=+0.923316937,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.097106 master-0 kubenswrapper[3962]: E0308 21:55:46.089421 3962 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189afc696e80c9e2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189afc696e80c9e2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:33.07941117 +0000 UTC 
m=+0.712683412,LastTimestamp:2026-03-08 21:55:33.290056195 +0000 UTC m=+0.923328407,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.102118 master-0 kubenswrapper[3962]: E0308 21:55:46.101441 3962 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189afc696e811b73\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189afc696e811b73 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:33.079432051 +0000 UTC m=+0.712704293,LastTimestamp:2026-03-08 21:55:33.290066946 +0000 UTC m=+0.923339158,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.112180 master-0 kubenswrapper[3962]: E0308 21:55:46.110698 3962 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189afc696e8030bf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189afc696e8030bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:33.079371967 +0000 UTC m=+0.712644209,LastTimestamp:2026-03-08 21:55:33.29049195 +0000 UTC m=+0.923764152,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.115519 master-0 kubenswrapper[3962]: E0308 21:55:46.115407 3962 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189afc696e80c9e2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189afc696e80c9e2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:33.07941117 +0000 UTC m=+0.712683412,LastTimestamp:2026-03-08 21:55:33.29050297 +0000 UTC m=+0.923775162,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.125087 master-0 kubenswrapper[3962]: E0308 21:55:46.123479 3962 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189afc696e811b73\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189afc696e811b73 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:33.079432051 +0000 UTC m=+0.712704293,LastTimestamp:2026-03-08 21:55:33.290512191 +0000 UTC m=+0.923784393,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.133206 master-0 kubenswrapper[3962]: E0308 21:55:46.128125 3962 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189afc696e8030bf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189afc696e8030bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:33.079371967 +0000 UTC m=+0.712644209,LastTimestamp:2026-03-08 21:55:33.290861834 +0000 UTC m=+0.924134036,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.133206 master-0 kubenswrapper[3962]: E0308 21:55:46.131265 3962 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189afc696e80c9e2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189afc696e80c9e2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:33.07941117 +0000 UTC m=+0.712683412,LastTimestamp:2026-03-08 21:55:33.290906185 +0000 UTC m=+0.924178387,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.136057 master-0 kubenswrapper[3962]: E0308 21:55:46.136001 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189afc6a00756b1e kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:35.52813955 +0000 UTC m=+3.161411792,LastTimestamp:2026-03-08 21:55:35.52813955 +0000 UTC m=+3.161411792,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.139887 master-0 kubenswrapper[3962]: E0308 21:55:46.139833 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189afc6a00c574f5 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:35.533384949 +0000 UTC m=+3.166657191,LastTimestamp:2026-03-08 21:55:35.533384949 +0000 UTC m=+3.166657191,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.144549 master-0 kubenswrapper[3962]: E0308 21:55:46.144393 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189afc6a03926441 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:35.580369985 +0000 UTC m=+3.213642227,LastTimestamp:2026-03-08 21:55:35.580369985 +0000 UTC m=+3.213642227,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.149272 master-0 kubenswrapper[3962]: E0308 21:55:46.149193 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189afc6a079d784b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:35.648204875 +0000 UTC m=+3.281477087,LastTimestamp:2026-03-08 21:55:35.648204875 +0000 UTC m=+3.281477087,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.153621 master-0 kubenswrapper[3962]: E0308 21:55:46.153364 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189afc6a0e3a5617 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:35.759148567 +0000 UTC m=+3.392420769,LastTimestamp:2026-03-08 21:55:35.759148567 +0000 UTC m=+3.392420769,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.157418 master-0 kubenswrapper[3962]: E0308 21:55:46.157298 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189afc6a5eb4ff19 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" in 1.528s (1.528s including waiting). 
Image size: 465086330 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:37.109364505 +0000 UTC m=+4.742636707,LastTimestamp:2026-03-08 21:55:37.109364505 +0000 UTC m=+4.742636707,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.161004 master-0 kubenswrapper[3962]: E0308 21:55:46.160816 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189afc6a6b9ca180 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:37.325871488 +0000 UTC m=+4.959143690,LastTimestamp:2026-03-08 21:55:37.325871488 +0000 UTC m=+4.959143690,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.164342 master-0 kubenswrapper[3962]: E0308 21:55:46.164208 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189afc6a6c896867 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:37.341388903 +0000 UTC m=+4.974661115,LastTimestamp:2026-03-08 21:55:37.341388903 +0000 UTC m=+4.974661115,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.169539 master-0 kubenswrapper[3962]: E0308 21:55:46.168943 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189afc6aab986e19 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:38.399338009 +0000 UTC m=+6.032610201,LastTimestamp:2026-03-08 
21:55:38.399338009 +0000 UTC m=+6.032610201,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.173957 master-0 kubenswrapper[3962]: E0308 21:55:46.173777 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189afc6aae5203c4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\" in 2.685s (2.685s including waiting). Image size: 529324693 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:38.445054916 +0000 UTC m=+6.078327118,LastTimestamp:2026-03-08 21:55:38.445054916 +0000 UTC m=+6.078327118,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.179685 master-0 kubenswrapper[3962]: E0308 21:55:46.179504 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189afc6ab7f960d4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:38.607018196 +0000 UTC m=+6.240290398,LastTimestamp:2026-03-08 21:55:38.607018196 +0000 UTC m=+6.240290398,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.184279 master-0 kubenswrapper[3962]: E0308 21:55:46.184182 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189afc6ab9f65abc openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:38.64037446 +0000 UTC m=+6.273646662,LastTimestamp:2026-03-08 21:55:38.64037446 +0000 UTC m=+6.273646662,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.190228 master-0 kubenswrapper[3962]: E0308 21:55:46.190061 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189afc6abaa6e944 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:38.651945284 +0000 UTC m=+6.285217486,LastTimestamp:2026-03-08 21:55:38.651945284 +0000 UTC m=+6.285217486,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.194389 master-0 kubenswrapper[3962]: E0308 21:55:46.194217 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189afc6abb81a2f2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:38.666279666 +0000 UTC m=+6.299551868,LastTimestamp:2026-03-08 21:55:38.666279666 +0000 UTC m=+6.299551868,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.199638 master-0 kubenswrapper[3962]: E0308 21:55:46.199529 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189afc6abbbd52ca openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:38.670191306 +0000 UTC m=+6.303463498,LastTimestamp:2026-03-08 21:55:38.670191306 +0000 UTC m=+6.303463498,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.205887 master-0 kubenswrapper[3962]: E0308 21:55:46.205704 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-master-0-master-0.189afc6ac712671f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:38.860316447 +0000 UTC m=+6.493588679,LastTimestamp:2026-03-08 21:55:38.860316447 +0000 UTC m=+6.493588679,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.209788 master-0 kubenswrapper[3962]: E0308 21:55:46.209615 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189afc6ac82ea4cc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:38.87894446 +0000 UTC m=+6.512216672,LastTimestamp:2026-03-08 21:55:38.87894446 +0000 UTC m=+6.512216672,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.215615 master-0 kubenswrapper[3962]: E0308 21:55:46.215102 3962 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189afc6aab986e19\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189afc6aab986e19 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:38.399338009 +0000 UTC m=+6.032610201,LastTimestamp:2026-03-08 21:55:39.230321564 +0000 UTC m=+6.863593806,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.219983 master-0 kubenswrapper[3962]: E0308 21:55:46.219821 3962 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189afc6ab7f960d4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189afc6ab7f960d4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:38.607018196 +0000 UTC m=+6.240290398,LastTimestamp:2026-03-08 21:55:39.669119421 +0000 UTC m=+7.302391623,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.224195 master-0 kubenswrapper[3962]: E0308 21:55:46.224019 3962 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189afc6ab9f65abc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189afc6ab9f65abc openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:38.64037446 +0000 UTC m=+6.273646662,LastTimestamp:2026-03-08 21:55:39.689159988 +0000 UTC m=+7.322432190,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.230927 master-0 kubenswrapper[3962]: E0308 21:55:46.228470 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189afc6b18dab153 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:40.232397139 +0000 UTC m=+7.865669341,LastTimestamp:2026-03-08 21:55:40.232397139 +0000 UTC m=+7.865669341,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.234295 master-0 kubenswrapper[3962]: E0308 21:55:46.234163 3962 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189afc6b18dab153\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189afc6b18dab153 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:40.232397139 +0000 UTC m=+7.865669341,LastTimestamp:2026-03-08 21:55:41.235865071 +0000 UTC m=+8.869137273,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.243303 master-0 kubenswrapper[3962]: E0308 21:55:46.243124 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189afc6ba992117d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" in 7.132s (7.132s including waiting). Image size: 943837171 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:42.660333949 +0000 UTC m=+10.293606161,LastTimestamp:2026-03-08 21:55:42.660333949 +0000 UTC m=+10.293606161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.248313 master-0 kubenswrapper[3962]: E0308 21:55:46.248207 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189afc6baab8a408 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" in 7.031s (7.031s including waiting). 
Image size: 943837171 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:42.679639048 +0000 UTC m=+10.312911270,LastTimestamp:2026-03-08 21:55:42.679639048 +0000 UTC m=+10.312911270,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.252655 master-0 kubenswrapper[3962]: E0308 21:55:46.252486 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189afc6badc6801d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" in 7.197s (7.197s including waiting). Image size: 943837171 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:42.730879005 +0000 UTC m=+10.364151207,LastTimestamp:2026-03-08 21:55:42.730879005 +0000 UTC m=+10.364151207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.262321 master-0 kubenswrapper[3962]: E0308 21:55:46.262222 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189afc6bb6b90849 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:42.880991305 +0000 UTC m=+10.514263497,LastTimestamp:2026-03-08 21:55:42.880991305 +0000 UTC m=+10.514263497,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.266654 master-0 kubenswrapper[3962]: E0308 21:55:46.266378 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189afc6bb75b8bb9 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:42.891641785 +0000 UTC m=+10.524913997,LastTimestamp:2026-03-08 21:55:42.891641785 +0000 UTC 
m=+10.524913997,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.271870 master-0 kubenswrapper[3962]: E0308 21:55:46.271696 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189afc6bb7860a83 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:42.894426755 +0000 UTC m=+10.527698957,LastTimestamp:2026-03-08 21:55:42.894426755 +0000 UTC m=+10.527698957,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.275923 master-0 kubenswrapper[3962]: E0308 21:55:46.275850 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189afc6bb80cd110 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:42.903259408 +0000 UTC m=+10.536531610,LastTimestamp:2026-03-08 21:55:42.903259408 +0000 UTC m=+10.536531610,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.279954 master-0 kubenswrapper[3962]: E0308 21:55:46.279811 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189afc6bb8f22fe5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:42.918291429 +0000 UTC m=+10.551563631,LastTimestamp:2026-03-08 21:55:42.918291429 +0000 UTC m=+10.551563631,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.284207 master-0 kubenswrapper[3962]: E0308 21:55:46.284032 3962 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189afc6bbfa734eb kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:43.030818027 +0000 UTC m=+10.664090259,LastTimestamp:2026-03-08 21:55:43.030818027 +0000 UTC m=+10.664090259,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.287814 master-0 kubenswrapper[3962]: E0308 21:55:46.287750 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189afc6bc0565a62 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:43.042296418 +0000 UTC m=+10.675568630,LastTimestamp:2026-03-08 21:55:43.042296418 +0000 UTC m=+10.675568630,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.292032 master-0 kubenswrapper[3962]: E0308 21:55:46.291930 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189afc6bcd216dd8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:43.2569318 +0000 UTC m=+10.890204042,LastTimestamp:2026-03-08 21:55:43.2569318 +0000 UTC m=+10.890204042,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.296314 master-0 kubenswrapper[3962]: E0308 21:55:46.296213 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189afc6bdaf81770 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 
UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:43.489103728 +0000 UTC m=+11.122375930,LastTimestamp:2026-03-08 21:55:43.489103728 +0000 UTC m=+11.122375930,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.300038 master-0 kubenswrapper[3962]: E0308 21:55:46.299923 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189afc6bdba1998b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:43.500212619 +0000 UTC m=+11.133484831,LastTimestamp:2026-03-08 21:55:43.500212619 +0000 UTC m=+11.133484831,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.303766 master-0 kubenswrapper[3962]: E0308 21:55:46.303679 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189afc6bdbb96146 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:43.501771078 +0000 UTC m=+11.135043280,LastTimestamp:2026-03-08 21:55:43.501771078 +0000 UTC m=+11.135043280,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.532806 master-0 kubenswrapper[3962]: E0308 21:55:46.532684 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189afc6c8ffadcf4 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\" in 3.631s (3.631s including waiting). Image size: 505242594 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:46.52596146 +0000 UTC m=+14.159233682,LastTimestamp:2026-03-08 21:55:46.52596146 +0000 UTC m=+14.159233682,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.538379 master-0 kubenswrapper[3962]: E0308 21:55:46.538293 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189afc6c9066f588 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" in 3.031s (3.031s including waiting). Image size: 514980169 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:46.53304564 +0000 UTC m=+14.166317852,LastTimestamp:2026-03-08 21:55:46.53304564 +0000 UTC m=+14.166317852,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.711661 master-0 kubenswrapper[3962]: E0308 21:55:46.711493 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189afc6c9abee042 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:46.706579522 +0000 UTC m=+14.339851744,LastTimestamp:2026-03-08 21:55:46.706579522 +0000 UTC m=+14.339851744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.725306 master-0 kubenswrapper[3962]: E0308 21:55:46.725155 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189afc6c9b8cc54a kube-system 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:46.720073034 +0000 UTC m=+14.353345236,LastTimestamp:2026-03-08 21:55:46.720073034 +0000 UTC m=+14.353345236,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.732015 master-0 kubenswrapper[3962]: E0308 21:55:46.731537 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189afc6c9bfaba21 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:46.727279137 +0000 UTC m=+14.360551359,LastTimestamp:2026-03-08 21:55:46.727279137 +0000 UTC m=+14.360551359,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:46.743916 master-0 kubenswrapper[3962]: E0308 21:55:46.743761 3962 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189afc6c9c8c7099 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:46.736828569 +0000 UTC m=+14.370100781,LastTimestamp:2026-03-08 21:55:46.736828569 +0000 UTC m=+14.370100781,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:47.033223 master-0 kubenswrapper[3962]: I0308 21:55:47.033161 3962 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 21:55:47.295536 master-0 kubenswrapper[3962]: I0308 21:55:47.295361 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"81880effd0e6f8229eefecfa74f76d169bbd4c02b4efe891a8b85181d0ccd2ca"} Mar 08 21:55:47.295536 
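The burst of "Server rejected event" errors above is expected at this stage of bootstrap: the kubelet is still talking to the API server with unauthenticated (system:anonymous) credentials because its client certificate has not been issued yet, so RBAC denies every write, including Event creation for the static bootstrap pods. A minimal sketch of how such a permission can be probed with client-go's SelfSubjectAccessReview (the kubeconfig path is a placeholder, not taken from this log):

package main

import (
    "context"
    "fmt"

    authorizationv1 "k8s.io/api/authorization/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Hypothetical path; the kubelet's real bootstrap kubeconfig location varies.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    // Ask the API server: may *my current credentials* create Events in kube-system?
    ssar := &authorizationv1.SelfSubjectAccessReview{
        Spec: authorizationv1.SelfSubjectAccessReviewSpec{
            ResourceAttributes: &authorizationv1.ResourceAttributes{
                Namespace: "kube-system",
                Verb:      "create",
                Resource:  "events",
            },
        },
    }
    resp, err := client.AuthorizationV1().SelfSubjectAccessReviews().
        Create(context.TODO(), ssar, metav1.CreateOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}

Once the kubelet's client CSR is approved and issued later in this log, the same probe would flip to allowed=true under the node's own identity instead of system:anonymous.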
Mar 08 21:55:47.295536 master-0 kubenswrapper[3962]: I0308 21:55:47.295394 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 21:55:47.298995 master-0 kubenswrapper[3962]: I0308 21:55:47.298952 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 21:55:47.299113 master-0 kubenswrapper[3962]: I0308 21:55:47.299001 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 21:55:47.299113 master-0 kubenswrapper[3962]: I0308 21:55:47.299016 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 21:55:47.302050 master-0 kubenswrapper[3962]: I0308 21:55:47.302003 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"fb0495eb908674ff43dd829011c8a9c25cb94f391266361a6fd4682f944d4674"}
Mar 08 21:55:47.302460 master-0 kubenswrapper[3962]: I0308 21:55:47.302430 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 21:55:47.303723 master-0 kubenswrapper[3962]: I0308 21:55:47.303685 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 21:55:47.303723 master-0 kubenswrapper[3962]: I0308 21:55:47.303713 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 21:55:47.303723 master-0 kubenswrapper[3962]: I0308 21:55:47.303724 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 21:55:47.896501 master-0 kubenswrapper[3962]: I0308 21:55:47.896416 3962 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 08 21:55:47.920498 master-0 kubenswrapper[3962]: I0308 21:55:47.920405 3962 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Mar 08 21:55:47.978761 master-0 kubenswrapper[3962]: I0308 21:55:47.978715 3962 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 21:55:47.986744 master-0 kubenswrapper[3962]: I0308 21:55:47.986659 3962 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 21:55:48.031387 master-0 kubenswrapper[3962]: I0308 21:55:48.031262 3962 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 08 21:55:48.305689 master-0 kubenswrapper[3962]: I0308 21:55:48.305513 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 21:55:48.305689 master-0 kubenswrapper[3962]: I0308 21:55:48.305619 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 21:55:48.306183 master-0 kubenswrapper[3962]: I0308 21:55:48.306134 3962 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 21:55:48.306761 master-0 kubenswrapper[3962]: I0308 21:55:48.306706 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 21:55:48.306870 master-0 kubenswrapper[3962]: I0308 21:55:48.306769 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 21:55:48.306870 master-0 kubenswrapper[3962]: I0308 21:55:48.306790 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 21:55:48.307489 master-0 kubenswrapper[3962]: I0308 21:55:48.307439 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 21:55:48.307555 master-0 kubenswrapper[3962]: I0308 21:55:48.307492 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 21:55:48.307555 master-0 kubenswrapper[3962]: I0308 21:55:48.307508 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 21:55:48.311940 master-0 kubenswrapper[3962]: I0308 21:55:48.311900 3962 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 21:55:49.029747 master-0 kubenswrapper[3962]: I0308 21:55:49.029644 3962 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 08 21:55:49.308744 master-0 kubenswrapper[3962]: I0308 21:55:49.308535 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 21:55:49.310179 master-0 kubenswrapper[3962]: I0308 21:55:49.310104 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 21:55:49.310269 master-0 kubenswrapper[3962]: I0308 21:55:49.310198 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 21:55:49.310269 master-0 kubenswrapper[3962]: I0308 21:55:49.310223 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 21:55:50.030119 master-0 kubenswrapper[3962]: I0308 21:55:50.030007 3962 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 08 21:55:50.212951 master-0 kubenswrapper[3962]: I0308 21:55:50.212874 3962 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 21:55:50.213695 master-0 kubenswrapper[3962]: I0308 21:55:50.213668 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 21:55:50.215559 master-0 kubenswrapper[3962]: I0308 21:55:50.215489 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 21:55:50.215796 master-0 kubenswrapper[3962]: I0308 21:55:50.215765 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 21:55:50.215974 master-0 kubenswrapper[3962]: I0308 21:55:50.215944 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 21:55:50.311312 master-0 kubenswrapper[3962]: I0308 21:55:50.311156 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 21:55:50.317962 master-0 kubenswrapper[3962]: I0308 21:55:50.317800 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 21:55:50.317962 master-0 kubenswrapper[3962]: I0308 21:55:50.317863 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 21:55:50.317962 master-0 kubenswrapper[3962]: I0308 21:55:50.317880 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 21:55:50.381631 master-0 kubenswrapper[3962]: W0308 21:55:50.381541 3962 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Mar 08 21:55:50.381631 master-0 kubenswrapper[3962]: E0308 21:55:50.381634 3962 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Mar 08 21:55:51.030551 master-0 kubenswrapper[3962]: I0308 21:55:51.030490 3962 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 08 21:55:51.701967 master-0 kubenswrapper[3962]: I0308 21:55:51.701845 3962 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 21:55:51.702672 master-0 kubenswrapper[3962]: I0308 21:55:51.702052 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 21:55:51.704404 master-0 kubenswrapper[3962]: I0308 21:55:51.704323 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 21:55:51.704404 master-0 kubenswrapper[3962]: I0308 21:55:51.704390 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 21:55:51.704526 master-0 kubenswrapper[3962]: I0308 21:55:51.704422 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 21:55:51.709098 master-0 kubenswrapper[3962]: I0308 21:55:51.709048 3962 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 21:55:51.965491 master-0 kubenswrapper[3962]: W0308 21:55:51.965321 3962 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Mar 08 21:55:51.965491 master-0 kubenswrapper[3962]: E0308 21:55:51.965400 3962 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
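The reflector warnings follow the same pattern: the kubelet's informers try to LIST and WATCH nodes, services, and CSI objects, the API server answers 403 for system:anonymous, and client-go retries with backoff while routing the failure through the UnhandledError logger. A sketch that reproduces one of those LIST calls as a one-shot request and classifies the 403, assuming in-cluster credentials are available:

package main

import (
    "context"
    "fmt"

    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func main() {
    cfg, err := rest.InClusterConfig()
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    // Same shape as the reflector's failing LIST of *v1.Node.
    _, err = client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
        FieldSelector: "metadata.name=master-0",
    })
    switch {
    case err == nil:
        fmt.Println("list succeeded; credentials are no longer anonymous")
    case apierrors.IsForbidden(err):
        // The state this log shows: RBAC denies system:anonymous.
        fmt.Println("forbidden:", err)
    default:
        fmt.Println("other error:", err)
    }
}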
group \"\" at the cluster scope" logger="UnhandledError" Mar 08 21:55:52.018242 master-0 kubenswrapper[3962]: W0308 21:55:52.018159 3962 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 08 21:55:52.018465 master-0 kubenswrapper[3962]: E0308 21:55:52.018258 3962 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 08 21:55:52.028704 master-0 kubenswrapper[3962]: I0308 21:55:52.028646 3962 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 21:55:52.186893 master-0 kubenswrapper[3962]: I0308 21:55:52.186822 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 21:55:52.188662 master-0 kubenswrapper[3962]: I0308 21:55:52.188564 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 21:55:52.188728 master-0 kubenswrapper[3962]: I0308 21:55:52.188682 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 21:55:52.188728 master-0 kubenswrapper[3962]: I0308 21:55:52.188711 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 21:55:52.189530 master-0 kubenswrapper[3962]: I0308 21:55:52.189483 3962 scope.go:117] "RemoveContainer" containerID="6a5bf9045d4914b5d354f57de8fe7de3c463e5c1dd963b35ba2ae400e73476cf" Mar 08 21:55:52.204576 master-0 kubenswrapper[3962]: E0308 21:55:52.204456 3962 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189afc6aab986e19\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189afc6aab986e19 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:38.399338009 +0000 UTC m=+6.032610201,LastTimestamp:2026-03-08 21:55:52.194023528 +0000 UTC m=+19.827295770,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:52.317031 master-0 kubenswrapper[3962]: I0308 21:55:52.316371 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 21:55:52.317736 master-0 kubenswrapper[3962]: I0308 
21:55:52.317684 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 21:55:52.317902 master-0 kubenswrapper[3962]: I0308 21:55:52.317759 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 21:55:52.317902 master-0 kubenswrapper[3962]: I0308 21:55:52.317777 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 21:55:52.321258 master-0 kubenswrapper[3962]: I0308 21:55:52.321219 3962 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 21:55:52.467270 master-0 kubenswrapper[3962]: E0308 21:55:52.467010 3962 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189afc6ab7f960d4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189afc6ab7f960d4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:38.607018196 +0000 UTC m=+6.240290398,LastTimestamp:2026-03-08 21:55:52.460102113 +0000 UTC m=+20.093374325,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:52.479485 master-0 kubenswrapper[3962]: E0308 21:55:52.479244 3962 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189afc6ab9f65abc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189afc6ab9f65abc openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:38.64037446 +0000 UTC m=+6.273646662,LastTimestamp:2026-03-08 21:55:52.472678432 +0000 UTC m=+20.105950644,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:55:52.658128 master-0 kubenswrapper[3962]: E0308 21:55:52.658012 3962 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 08 21:55:52.895782 master-0 kubenswrapper[3962]: I0308 21:55:52.895678 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 
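controller.go:145 above is the kubelet's node-lease controller: every node maintains a coordination.k8s.io/v1 Lease in kube-node-lease as its heartbeat, and here even reading that Lease is forbidden, so the controller re-queues with the logged 7s retry interval. Roughly what it retries, sketched with client-go (a sketch, not the kubelet's actual implementation; the 40s lease duration is the Kubernetes default):

package main

import (
    "context"
    "time"

    coordinationv1 "k8s.io/api/coordination/v1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
    "k8s.io/utils/ptr"
)

func ensureLease(client kubernetes.Interface, node string) error {
    leases := client.CoordinationV1().Leases("kube-node-lease")
    now := metav1.NewMicroTime(time.Now())

    lease, err := leases.Get(context.TODO(), node, metav1.GetOptions{})
    if apierrors.IsNotFound(err) {
        // First heartbeat: create the Lease named after the node.
        lease = &coordinationv1.Lease{
            ObjectMeta: metav1.ObjectMeta{Name: node, Namespace: "kube-node-lease"},
            Spec: coordinationv1.LeaseSpec{
                HolderIdentity:       ptr.To(node),
                LeaseDurationSeconds: ptr.To[int32](40),
                RenewTime:            &now,
            },
        }
        _, err = leases.Create(context.TODO(), lease, metav1.CreateOptions{})
        return err
    }
    if err != nil {
        return err // e.g. the Forbidden error seen in the log
    }
    // Subsequent heartbeats: bump RenewTime.
    lease.Spec.RenewTime = &now
    _, err = leases.Update(context.TODO(), lease, metav1.UpdateOptions{})
    return err
}

func main() {
    cfg, err := rest.InClusterConfig()
    if err != nil {
        panic(err)
    }
    _ = ensureLease(kubernetes.NewForConfigOrDie(cfg), "master-0")
}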
Mar 08 21:55:52.897560 master-0 kubenswrapper[3962]: I0308 21:55:52.897406 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 21:55:52.897560 master-0 kubenswrapper[3962]: I0308 21:55:52.897492 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 21:55:52.897560 master-0 kubenswrapper[3962]: I0308 21:55:52.897511 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 21:55:52.897560 master-0 kubenswrapper[3962]: I0308 21:55:52.897605 3962 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 08 21:55:52.905854 master-0 kubenswrapper[3962]: E0308 21:55:52.905778 3962 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Mar 08 21:55:53.031249 master-0 kubenswrapper[3962]: I0308 21:55:53.031102 3962 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 08 21:55:53.161229 master-0 kubenswrapper[3962]: E0308 21:55:53.161111 3962 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 08 21:55:53.320747 master-0 kubenswrapper[3962]: I0308 21:55:53.320684 3962 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log"
Mar 08 21:55:53.321719 master-0 kubenswrapper[3962]: I0308 21:55:53.321353 3962 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/1.log"
Mar 08 21:55:53.321874 master-0 kubenswrapper[3962]: I0308 21:55:53.321828 3962 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="f2753c6ede26e51916276b3918863819c08fcf1e3cfeb773ba0609d9fda8556b" exitCode=1
Mar 08 21:55:53.321989 master-0 kubenswrapper[3962]: I0308 21:55:53.321918 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"f2753c6ede26e51916276b3918863819c08fcf1e3cfeb773ba0609d9fda8556b"}
Mar 08 21:55:53.322048 master-0 kubenswrapper[3962]: I0308 21:55:53.322030 3962 scope.go:117] "RemoveContainer" containerID="6a5bf9045d4914b5d354f57de8fe7de3c463e5c1dd963b35ba2ae400e73476cf"
Mar 08 21:55:53.322120 master-0 kubenswrapper[3962]: I0308 21:55:53.321949 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 21:55:53.322274 master-0 kubenswrapper[3962]: I0308 21:55:53.322233 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 21:55:53.323265 master-0 kubenswrapper[3962]: I0308 21:55:53.323224 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 21:55:53.323341 master-0 kubenswrapper[3962]: I0308 21:55:53.323276 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 21:55:53.323341 master-0 kubenswrapper[3962]: I0308 21:55:53.323295 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 21:55:53.323506 master-0 kubenswrapper[3962]: I0308 21:55:53.323448 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 21:55:53.323562 master-0 kubenswrapper[3962]: I0308 21:55:53.323510 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 21:55:53.323562 master-0 kubenswrapper[3962]: I0308 21:55:53.323526 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 21:55:53.323810 master-0 kubenswrapper[3962]: I0308 21:55:53.323770 3962 scope.go:117] "RemoveContainer" containerID="f2753c6ede26e51916276b3918863819c08fcf1e3cfeb773ba0609d9fda8556b"
Mar 08 21:55:53.324112 master-0 kubenswrapper[3962]: E0308 21:55:53.324054 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72"
Mar 08 21:55:53.331345 master-0 kubenswrapper[3962]: E0308 21:55:53.331207 3962 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189afc6b18dab153\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189afc6b18dab153 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:55:40.232397139 +0000 UTC m=+7.865669341,LastTimestamp:2026-03-08 21:55:53.323966651 +0000 UTC m=+20.957238873,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 08 21:55:53.988162 master-0 kubenswrapper[3962]: I0308 21:55:53.987897 3962 csr.go:261] certificate signing request csr-l2kxl is approved, waiting to be issued
Mar 08 21:55:54.031596 master-0 kubenswrapper[3962]: I0308 21:55:54.031513 3962 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 08 21:55:54.327379 master-0 kubenswrapper[3962]: I0308 21:55:54.327250 3962 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log"
Mar 08 21:55:55.035346 master-0 kubenswrapper[3962]: I0308 21:55:55.035209 3962 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 08 21:55:55.408474 master-0 kubenswrapper[3962]: I0308 21:55:55.408358 3962 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 21:55:55.409359 master-0 kubenswrapper[3962]: I0308 21:55:55.408576 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 21:55:55.409924 master-0 kubenswrapper[3962]: I0308 21:55:55.409868 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 21:55:55.409924 master-0 kubenswrapper[3962]: I0308 21:55:55.409917 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 21:55:55.410058 master-0 kubenswrapper[3962]: I0308 21:55:55.409939 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 21:55:55.416491 master-0 kubenswrapper[3962]: I0308 21:55:55.416416 3962 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 21:55:56.029874 master-0 kubenswrapper[3962]: I0308 21:55:56.029755 3962 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 08 21:55:56.032209 master-0 kubenswrapper[3962]: I0308 21:55:56.032140 3962 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 21:55:56.039158 master-0 kubenswrapper[3962]: I0308 21:55:56.039105 3962 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 21:55:56.260986 master-0 kubenswrapper[3962]: W0308 21:55:56.260862 3962 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Mar 08 21:55:56.260986 master-0 kubenswrapper[3962]: E0308 21:55:56.260983 3962 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError"
Mar 08 21:55:56.332905 master-0 kubenswrapper[3962]: I0308 21:55:56.332810 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 21:55:56.334432 master-0 kubenswrapper[3962]: I0308 21:55:56.334382 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 21:55:56.334432 master-0 kubenswrapper[3962]: I0308 21:55:56.334427 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 21:55:56.334582 master-0 kubenswrapper[3962]: I0308 21:55:56.334447 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 21:55:57.032606 master-0 kubenswrapper[3962]: I0308 21:55:57.032498 3962 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 08 21:55:57.335254 master-0 kubenswrapper[3962]: I0308 21:55:57.335190 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 21:55:57.336683 master-0 kubenswrapper[3962]: I0308 21:55:57.336646 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 21:55:57.336789 master-0 kubenswrapper[3962]: I0308 21:55:57.336778 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 21:55:57.336848 master-0 kubenswrapper[3962]: I0308 21:55:57.336839 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 21:55:58.030310 master-0 kubenswrapper[3962]: I0308 21:55:58.030253 3962 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 08 21:55:59.030090 master-0 kubenswrapper[3962]: I0308 21:55:59.030003 3962 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 08 21:55:59.667223 master-0 kubenswrapper[3962]: E0308 21:55:59.667135 3962 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 08 21:55:59.906341 master-0 kubenswrapper[3962]: I0308 21:55:59.906274 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 21:55:59.908368 master-0 kubenswrapper[3962]: I0308 21:55:59.908333 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 21:55:59.908575 master-0 kubenswrapper[3962]: I0308 21:55:59.908551 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 21:55:59.908698 master-0 kubenswrapper[3962]: I0308 21:55:59.908678 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 21:55:59.908900 master-0 kubenswrapper[3962]: I0308 21:55:59.908877 3962 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 08 21:55:59.916767 master-0 kubenswrapper[3962]: E0308 21:55:59.916732 3962 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
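kubelet_node_status.go:76 and :99 show self-registration failing for the same underlying reason: creating the Node object is a POST to /api/v1/nodes, which anonymous credentials may not perform. Stripped to its core, the call looks like the sketch below (the real kubelet also attaches labels, taints, addresses, and capacity before creating the object):

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func main() {
    cfg, err := rest.InClusterConfig()
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    // Minimal Node object; the kubelet's version carries much more metadata.
    node := &corev1.Node{
        ObjectMeta: metav1.ObjectMeta{
            Name:   "master-0",
            Labels: map[string]string{"kubernetes.io/hostname": "master-0"},
        },
    }
    if _, err := client.CoreV1().Nodes().Create(context.TODO(), node, metav1.CreateOptions{}); err != nil {
        fmt.Println("register failed:", err) // cf. kubelet_node_status.go:99 above
    }
}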
csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 21:56:01.031122 master-0 kubenswrapper[3962]: I0308 21:56:01.031011 3962 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 21:56:02.033118 master-0 kubenswrapper[3962]: I0308 21:56:02.033014 3962 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 21:56:02.588160 master-0 kubenswrapper[3962]: I0308 21:56:02.588098 3962 csr.go:257] certificate signing request csr-l2kxl is issued Mar 08 21:56:02.890888 master-0 kubenswrapper[3962]: I0308 21:56:02.890747 3962 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 08 21:56:03.038005 master-0 kubenswrapper[3962]: I0308 21:56:03.037954 3962 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 21:56:03.053045 master-0 kubenswrapper[3962]: I0308 21:56:03.052999 3962 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 21:56:03.112524 master-0 kubenswrapper[3962]: I0308 21:56:03.112451 3962 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 21:56:03.162173 master-0 kubenswrapper[3962]: E0308 21:56:03.161980 3962 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 08 21:56:03.390799 master-0 kubenswrapper[3962]: I0308 21:56:03.390691 3962 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 21:56:03.390799 master-0 kubenswrapper[3962]: E0308 21:56:03.390756 3962 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Mar 08 21:56:03.414430 master-0 kubenswrapper[3962]: I0308 21:56:03.414275 3962 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 21:56:03.430704 master-0 kubenswrapper[3962]: I0308 21:56:03.430649 3962 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 21:56:03.487370 master-0 kubenswrapper[3962]: I0308 21:56:03.487288 3962 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 21:56:03.589760 master-0 kubenswrapper[3962]: I0308 21:56:03.589652 3962 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-09 21:47:40 +0000 UTC, rotation deadline is 2026-03-09 15:22:32.771290095 +0000 UTC Mar 08 21:56:03.589760 master-0 kubenswrapper[3962]: I0308 21:56:03.589730 3962 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 17h26m29.181565282s for next certificate rotation Mar 08 21:56:03.766364 master-0 kubenswrapper[3962]: I0308 21:56:03.766193 3962 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 21:56:03.766364 master-0 kubenswrapper[3962]: E0308 21:56:03.766267 3962 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes 
"master-0" not found Mar 08 21:56:03.878082 master-0 kubenswrapper[3962]: I0308 21:56:03.878004 3962 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 21:56:03.904219 master-0 kubenswrapper[3962]: I0308 21:56:03.904165 3962 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 21:56:03.961468 master-0 kubenswrapper[3962]: I0308 21:56:03.961428 3962 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 21:56:04.186746 master-0 kubenswrapper[3962]: I0308 21:56:04.186662 3962 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 21:56:04.189387 master-0 kubenswrapper[3962]: I0308 21:56:04.189318 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 21:56:04.189387 master-0 kubenswrapper[3962]: I0308 21:56:04.189372 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 21:56:04.189387 master-0 kubenswrapper[3962]: I0308 21:56:04.189390 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 21:56:04.190049 master-0 kubenswrapper[3962]: I0308 21:56:04.189992 3962 scope.go:117] "RemoveContainer" containerID="f2753c6ede26e51916276b3918863819c08fcf1e3cfeb773ba0609d9fda8556b" Mar 08 21:56:04.190404 master-0 kubenswrapper[3962]: E0308 21:56:04.190343 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72" Mar 08 21:56:04.234658 master-0 kubenswrapper[3962]: I0308 21:56:04.234597 3962 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 21:56:04.234658 master-0 kubenswrapper[3962]: E0308 21:56:04.234634 3962 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Mar 08 21:56:04.794602 master-0 kubenswrapper[3962]: I0308 21:56:04.794535 3962 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 21:56:04.813441 master-0 kubenswrapper[3962]: I0308 21:56:04.813383 3962 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 21:56:04.871217 master-0 kubenswrapper[3962]: I0308 21:56:04.871145 3962 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 21:56:05.140350 master-0 kubenswrapper[3962]: I0308 21:56:05.140269 3962 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 21:56:05.140350 master-0 kubenswrapper[3962]: E0308 21:56:05.140329 3962 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Mar 08 21:56:06.675566 master-0 kubenswrapper[3962]: E0308 21:56:06.675489 3962 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-0\" not found" node="master-0" Mar 08 21:56:06.917617 master-0 kubenswrapper[3962]: I0308 21:56:06.917472 3962 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Mar 08 21:56:06.919578 master-0 kubenswrapper[3962]: I0308 21:56:06.919504 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 21:56:06.920023 master-0 kubenswrapper[3962]: I0308 21:56:06.919590 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 21:56:06.920023 master-0 kubenswrapper[3962]: I0308 21:56:06.919622 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 21:56:06.920023 master-0 kubenswrapper[3962]: I0308 21:56:06.919739 3962 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 08 21:56:06.933306 master-0 kubenswrapper[3962]: I0308 21:56:06.933154 3962 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Mar 08 21:56:06.933306 master-0 kubenswrapper[3962]: E0308 21:56:06.933232 3962 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found" Mar 08 21:56:06.949860 master-0 kubenswrapper[3962]: E0308 21:56:06.949805 3962 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 21:56:07.047360 master-0 kubenswrapper[3962]: I0308 21:56:07.047248 3962 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 08 21:56:07.050621 master-0 kubenswrapper[3962]: E0308 21:56:07.050578 3962 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 21:56:07.064945 master-0 kubenswrapper[3962]: I0308 21:56:07.064818 3962 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 08 21:56:07.151734 master-0 kubenswrapper[3962]: E0308 21:56:07.151645 3962 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 21:56:07.252203 master-0 kubenswrapper[3962]: E0308 21:56:07.252023 3962 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 21:56:07.352924 master-0 kubenswrapper[3962]: E0308 21:56:07.352832 3962 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 21:56:07.453327 master-0 kubenswrapper[3962]: E0308 21:56:07.453200 3962 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 21:56:07.553712 master-0 kubenswrapper[3962]: E0308 21:56:07.553532 3962 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 21:56:07.654464 master-0 kubenswrapper[3962]: E0308 21:56:07.654349 3962 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 21:56:07.755251 master-0 kubenswrapper[3962]: E0308 21:56:07.755145 3962 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 21:56:07.839415 master-0 kubenswrapper[3962]: I0308 21:56:07.839337 3962 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 08 21:56:08.020894 master-0 kubenswrapper[3962]: I0308 21:56:08.020767 3962 apiserver.go:52] "Watching apiserver" Mar 08 21:56:08.025023 master-0 kubenswrapper[3962]: I0308 21:56:08.024933 3962 
reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 08 21:56:08.025242 master-0 kubenswrapper[3962]: I0308 21:56:08.025167 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=[] Mar 08 21:56:08.029056 master-0 kubenswrapper[3962]: I0308 21:56:08.028970 3962 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Mar 08 21:56:08.230533 master-0 kubenswrapper[3962]: I0308 21:56:08.230381 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-7c649bf6d4-znt8q"] Mar 08 21:56:08.230896 master-0 kubenswrapper[3962]: I0308 21:56:08.230854 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" Mar 08 21:56:08.234395 master-0 kubenswrapper[3962]: I0308 21:56:08.234342 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 08 21:56:08.234874 master-0 kubenswrapper[3962]: I0308 21:56:08.234835 3962 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 08 21:56:08.237519 master-0 kubenswrapper[3962]: I0308 21:56:08.237459 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 08 21:56:08.374911 master-0 kubenswrapper[3962]: I0308 21:56:08.374796 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/a21e2296-10cb-4c70-ac3e-2173d35faac4-host-etc-kube\") pod \"network-operator-7c649bf6d4-znt8q\" (UID: \"a21e2296-10cb-4c70-ac3e-2173d35faac4\") " pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" Mar 08 21:56:08.374911 master-0 kubenswrapper[3962]: I0308 21:56:08.374862 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a21e2296-10cb-4c70-ac3e-2173d35faac4-metrics-tls\") pod \"network-operator-7c649bf6d4-znt8q\" (UID: \"a21e2296-10cb-4c70-ac3e-2173d35faac4\") " pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" Mar 08 21:56:08.374911 master-0 kubenswrapper[3962]: I0308 21:56:08.374908 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xcbb\" (UniqueName: \"kubernetes.io/projected/a21e2296-10cb-4c70-ac3e-2173d35faac4-kube-api-access-7xcbb\") pod \"network-operator-7c649bf6d4-znt8q\" (UID: \"a21e2296-10cb-4c70-ac3e-2173d35faac4\") " pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" Mar 08 21:56:08.475432 master-0 kubenswrapper[3962]: I0308 21:56:08.475289 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xcbb\" (UniqueName: \"kubernetes.io/projected/a21e2296-10cb-4c70-ac3e-2173d35faac4-kube-api-access-7xcbb\") pod \"network-operator-7c649bf6d4-znt8q\" (UID: \"a21e2296-10cb-4c70-ac3e-2173d35faac4\") " pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" Mar 08 21:56:08.475432 master-0 kubenswrapper[3962]: I0308 21:56:08.475393 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/a21e2296-10cb-4c70-ac3e-2173d35faac4-host-etc-kube\") pod \"network-operator-7c649bf6d4-znt8q\" (UID: \"a21e2296-10cb-4c70-ac3e-2173d35faac4\") " 
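"Successfully registered node" and "Watching apiserver" mark the turn from bootstrap to normal operation: the kubelet's pod source is now a filtered watch for pods scheduled to this node, and the first ADDs (the network operator pod) arrive through it. Conceptually, that pod source behaves like the informer sketch below with a spec.nodeName field selector (a sketch of the shape of the mechanism, not the kubelet's own code):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/informers"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/cache"
)

func main() {
    cfg, err := rest.InClusterConfig()
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    // Only watch pods bound to this node, like the kubelet's pod config source.
    factory := informers.NewSharedInformerFactoryWithOptions(client, 0,
        informers.WithTweakListOptions(func(o *metav1.ListOptions) {
            o.FieldSelector = "spec.nodeName=master-0"
        }))
    podInformer := factory.Core().V1().Pods().Informer()
    podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
        AddFunc: func(obj interface{}) {
            pod := obj.(*corev1.Pod)
            fmt.Printf("SyncLoop ADD %s/%s\n", pod.Namespace, pod.Name) // cf. kubelet.go:2421
        },
    })

    stop := make(chan struct{})
    defer close(stop)
    factory.Start(stop)
    cache.WaitForCacheSync(stop, podInformer.HasSynced)
    select {} // keep watching
}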
pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" Mar 08 21:56:08.476819 master-0 kubenswrapper[3962]: I0308 21:56:08.475698 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a21e2296-10cb-4c70-ac3e-2173d35faac4-metrics-tls\") pod \"network-operator-7c649bf6d4-znt8q\" (UID: \"a21e2296-10cb-4c70-ac3e-2173d35faac4\") " pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" Mar 08 21:56:08.476968 master-0 kubenswrapper[3962]: I0308 21:56:08.476333 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/a21e2296-10cb-4c70-ac3e-2173d35faac4-host-etc-kube\") pod \"network-operator-7c649bf6d4-znt8q\" (UID: \"a21e2296-10cb-4c70-ac3e-2173d35faac4\") " pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" Mar 08 21:56:08.477392 master-0 kubenswrapper[3962]: I0308 21:56:08.477288 3962 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 08 21:56:08.488516 master-0 kubenswrapper[3962]: I0308 21:56:08.488400 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a21e2296-10cb-4c70-ac3e-2173d35faac4-metrics-tls\") pod \"network-operator-7c649bf6d4-znt8q\" (UID: \"a21e2296-10cb-4c70-ac3e-2173d35faac4\") " pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" Mar 08 21:56:08.505927 master-0 kubenswrapper[3962]: I0308 21:56:08.505831 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xcbb\" (UniqueName: \"kubernetes.io/projected/a21e2296-10cb-4c70-ac3e-2173d35faac4-kube-api-access-7xcbb\") pod \"network-operator-7c649bf6d4-znt8q\" (UID: \"a21e2296-10cb-4c70-ac3e-2173d35faac4\") " pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" Mar 08 21:56:08.563232 master-0 kubenswrapper[3962]: I0308 21:56:08.563043 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" Mar 08 21:56:09.020845 master-0 kubenswrapper[3962]: I0308 21:56:09.020721 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8"] Mar 08 21:56:09.022121 master-0 kubenswrapper[3962]: I0308 21:56:09.021203 3962 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:56:09.024287 master-0 kubenswrapper[3962]: I0308 21:56:09.024220 3962 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 08 21:56:09.025132 master-0 kubenswrapper[3962]: I0308 21:56:09.025039 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 08 21:56:09.025446 master-0 kubenswrapper[3962]: I0308 21:56:09.025396 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 08 21:56:09.080181 master-0 kubenswrapper[3962]: I0308 21:56:09.080053 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:56:09.080181 master-0 kubenswrapper[3962]: I0308 21:56:09.080166 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d287e2ca-f134-4e34-96f7-50a3055ee119-service-ca\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:56:09.080567 master-0 kubenswrapper[3962]: I0308 21:56:09.080380 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d287e2ca-f134-4e34-96f7-50a3055ee119-kube-api-access\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:56:09.080567 master-0 kubenswrapper[3962]: I0308 21:56:09.080469 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d287e2ca-f134-4e34-96f7-50a3055ee119-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:56:09.080567 master-0 kubenswrapper[3962]: I0308 21:56:09.080502 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d287e2ca-f134-4e34-96f7-50a3055ee119-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:56:09.181377 master-0 kubenswrapper[3962]: I0308 21:56:09.181269 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:56:09.181701 master-0 kubenswrapper[3962]: I0308 21:56:09.181557 3962 reconciler_common.go:218] 
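The five volumes being attached for the cluster-version-operator pod span four volume types: hostPath (etc-ssl-certs, etc-cvo-updatepayloads), secret (serving-cert), configMap (service-ca), and a projected service-account token (kube-api-access). A sketch of how such a volume list looks in a pod spec; the hostPath paths and the configMap name are assumptions for illustration, not read from this log:

package main

import (
    corev1 "k8s.io/api/core/v1"
)

func cvoLikeVolumes() []corev1.Volume {
    return []corev1.Volume{
        {Name: "etc-ssl-certs", VolumeSource: corev1.VolumeSource{
            // Assumed path; hostPath mounts succeed immediately below.
            HostPath: &corev1.HostPathVolumeSource{Path: "/etc/ssl/certs"},
        }},
        {Name: "etc-cvo-updatepayloads", VolumeSource: corev1.VolumeSource{
            HostPath: &corev1.HostPathVolumeSource{Path: "/etc/cvo/updatepayloads"},
        }},
        {Name: "serving-cert", VolumeSource: corev1.VolumeSource{
            // This secret does not exist yet, which is why SetUp fails below.
            Secret: &corev1.SecretVolumeSource{
                SecretName: "cluster-version-operator-serving-cert",
            },
        }},
        {Name: "service-ca", VolumeSource: corev1.VolumeSource{
            ConfigMap: &corev1.ConfigMapVolumeSource{
                LocalObjectReference: corev1.LocalObjectReference{Name: "service-ca"},
            },
        }},
        {Name: "kube-api-access", VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
                        Path: "token",
                    },
                }},
            },
        }},
    }
}

func main() { _ = cvoLikeVolumes() }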
"operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d287e2ca-f134-4e34-96f7-50a3055ee119-service-ca\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:56:09.181701 master-0 kubenswrapper[3962]: I0308 21:56:09.181607 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d287e2ca-f134-4e34-96f7-50a3055ee119-kube-api-access\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:56:09.181786 master-0 kubenswrapper[3962]: E0308 21:56:09.181640 3962 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 08 21:56:09.181786 master-0 kubenswrapper[3962]: I0308 21:56:09.181747 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d287e2ca-f134-4e34-96f7-50a3055ee119-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:56:09.181861 master-0 kubenswrapper[3962]: I0308 21:56:09.181684 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d287e2ca-f134-4e34-96f7-50a3055ee119-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:56:09.182018 master-0 kubenswrapper[3962]: E0308 21:56:09.181973 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert podName:d287e2ca-f134-4e34-96f7-50a3055ee119 nodeName:}" failed. No retries permitted until 2026-03-08 21:56:09.681825278 +0000 UTC m=+37.315097670 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert") pod "cluster-version-operator-745944c6b7-d8fd8" (UID: "d287e2ca-f134-4e34-96f7-50a3055ee119") : secret "cluster-version-operator-serving-cert" not found Mar 08 21:56:09.182142 master-0 kubenswrapper[3962]: I0308 21:56:09.182060 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d287e2ca-f134-4e34-96f7-50a3055ee119-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:56:09.182309 master-0 kubenswrapper[3962]: I0308 21:56:09.182256 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d287e2ca-f134-4e34-96f7-50a3055ee119-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:56:09.183215 master-0 kubenswrapper[3962]: I0308 21:56:09.182994 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d287e2ca-f134-4e34-96f7-50a3055ee119-service-ca\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:56:09.243140 master-0 kubenswrapper[3962]: I0308 21:56:09.232545 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d287e2ca-f134-4e34-96f7-50a3055ee119-kube-api-access\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:56:09.251666 master-0 kubenswrapper[3962]: I0308 21:56:09.251573 3962 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 08 21:56:09.372023 master-0 kubenswrapper[3962]: I0308 21:56:09.371944 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" event={"ID":"a21e2296-10cb-4c70-ac3e-2173d35faac4","Type":"ContainerStarted","Data":"c9f54e610a612acd73c7eef641d4a04d687bbce1c7479f0807ca8b7e43cd718d"} Mar 08 21:56:09.686409 master-0 kubenswrapper[3962]: I0308 21:56:09.686235 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:56:09.686653 master-0 kubenswrapper[3962]: E0308 21:56:09.686464 3962 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 08 21:56:09.686653 master-0 kubenswrapper[3962]: E0308 21:56:09.686577 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert podName:d287e2ca-f134-4e34-96f7-50a3055ee119 nodeName:}" failed. 
No retries permitted until 2026-03-08 21:56:10.686546214 +0000 UTC m=+38.319818446 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert") pod "cluster-version-operator-745944c6b7-d8fd8" (UID: "d287e2ca-f134-4e34-96f7-50a3055ee119") : secret "cluster-version-operator-serving-cert" not found Mar 08 21:56:10.510047 master-0 kubenswrapper[3962]: I0308 21:56:10.509927 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-kxkrl"] Mar 08 21:56:10.510700 master-0 kubenswrapper[3962]: I0308 21:56:10.510469 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-kxkrl" Mar 08 21:56:10.513330 master-0 kubenswrapper[3962]: I0308 21:56:10.513275 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config" Mar 08 21:56:10.513580 master-0 kubenswrapper[3962]: I0308 21:56:10.513528 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt" Mar 08 21:56:10.513676 master-0 kubenswrapper[3962]: I0308 21:56:10.513644 3962 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret" Mar 08 21:56:10.514168 master-0 kubenswrapper[3962]: I0308 21:56:10.514129 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt" Mar 08 21:56:10.593480 master-0 kubenswrapper[3962]: I0308 21:56:10.593377 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/0a43561f-bdde-456b-b4a4-2055d4fe6880-host-resolv-conf\") pod \"assisted-installer-controller-kxkrl\" (UID: \"0a43561f-bdde-456b-b4a4-2055d4fe6880\") " pod="assisted-installer/assisted-installer-controller-kxkrl" Mar 08 21:56:10.593480 master-0 kubenswrapper[3962]: I0308 21:56:10.593454 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrprg\" (UniqueName: \"kubernetes.io/projected/0a43561f-bdde-456b-b4a4-2055d4fe6880-kube-api-access-vrprg\") pod \"assisted-installer-controller-kxkrl\" (UID: \"0a43561f-bdde-456b-b4a4-2055d4fe6880\") " pod="assisted-installer/assisted-installer-controller-kxkrl" Mar 08 21:56:10.593480 master-0 kubenswrapper[3962]: I0308 21:56:10.593491 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/0a43561f-bdde-456b-b4a4-2055d4fe6880-host-var-run-resolv-conf\") pod \"assisted-installer-controller-kxkrl\" (UID: \"0a43561f-bdde-456b-b4a4-2055d4fe6880\") " pod="assisted-installer/assisted-installer-controller-kxkrl" Mar 08 21:56:10.593788 master-0 kubenswrapper[3962]: I0308 21:56:10.593557 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/0a43561f-bdde-456b-b4a4-2055d4fe6880-host-ca-bundle\") pod \"assisted-installer-controller-kxkrl\" (UID: \"0a43561f-bdde-456b-b4a4-2055d4fe6880\") " pod="assisted-installer/assisted-installer-controller-kxkrl" Mar 08 21:56:10.593788 master-0 kubenswrapper[3962]: I0308 21:56:10.593739 3962 reconciler_common.go:245] 
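The entries above show the same failure loop twice: secret.go:189 cannot find cluster-version-operator-serving-cert, so nestedpendingoperations.go:348 schedules a retry (500ms, then 1s). On OpenShift this secret is normally generated by the service-ca operator from the serving-cert annotation on the CVO's service, so a window where it is missing during early bootstrap is expected and the kubelet simply keeps retrying. A minimal client-go sketch for checking whether the secret has appeared, assuming a reachable kubeconfig at the default location (the namespace and secret name are taken from the entries above):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumes ~/.kube/config; adjust for in-cluster or other setups.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Namespace and secret name come from the log entries above.
        s, err := cs.CoreV1().Secrets("openshift-cluster-version").
            Get(context.TODO(), "cluster-version-operator-serving-cert", metav1.GetOptions{})
        if err != nil {
            fmt.Println("secret not available yet:", err) // the condition the kubelet is retrying on
            return
        }
        fmt.Printf("secret present with %d keys\n", len(s.Data))
    }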
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/0a43561f-bdde-456b-b4a4-2055d4fe6880-sno-bootstrap-files\") pod \"assisted-installer-controller-kxkrl\" (UID: \"0a43561f-bdde-456b-b4a4-2055d4fe6880\") " pod="assisted-installer/assisted-installer-controller-kxkrl" Mar 08 21:56:10.694547 master-0 kubenswrapper[3962]: I0308 21:56:10.694471 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:56:10.694547 master-0 kubenswrapper[3962]: I0308 21:56:10.694530 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/0a43561f-bdde-456b-b4a4-2055d4fe6880-host-ca-bundle\") pod \"assisted-installer-controller-kxkrl\" (UID: \"0a43561f-bdde-456b-b4a4-2055d4fe6880\") " pod="assisted-installer/assisted-installer-controller-kxkrl" Mar 08 21:56:10.694547 master-0 kubenswrapper[3962]: I0308 21:56:10.694560 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/0a43561f-bdde-456b-b4a4-2055d4fe6880-sno-bootstrap-files\") pod \"assisted-installer-controller-kxkrl\" (UID: \"0a43561f-bdde-456b-b4a4-2055d4fe6880\") " pod="assisted-installer/assisted-installer-controller-kxkrl" Mar 08 21:56:10.694861 master-0 kubenswrapper[3962]: E0308 21:56:10.694777 3962 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 08 21:56:10.694861 master-0 kubenswrapper[3962]: I0308 21:56:10.694802 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/0a43561f-bdde-456b-b4a4-2055d4fe6880-host-resolv-conf\") pod \"assisted-installer-controller-kxkrl\" (UID: \"0a43561f-bdde-456b-b4a4-2055d4fe6880\") " pod="assisted-installer/assisted-installer-controller-kxkrl" Mar 08 21:56:10.694861 master-0 kubenswrapper[3962]: I0308 21:56:10.694834 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/0a43561f-bdde-456b-b4a4-2055d4fe6880-host-ca-bundle\") pod \"assisted-installer-controller-kxkrl\" (UID: \"0a43561f-bdde-456b-b4a4-2055d4fe6880\") " pod="assisted-installer/assisted-installer-controller-kxkrl" Mar 08 21:56:10.694978 master-0 kubenswrapper[3962]: E0308 21:56:10.694942 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert podName:d287e2ca-f134-4e34-96f7-50a3055ee119 nodeName:}" failed. No retries permitted until 2026-03-08 21:56:12.694853098 +0000 UTC m=+40.328125340 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert") pod "cluster-version-operator-745944c6b7-d8fd8" (UID: "d287e2ca-f134-4e34-96f7-50a3055ee119") : secret "cluster-version-operator-serving-cert" not found Mar 08 21:56:10.694978 master-0 kubenswrapper[3962]: I0308 21:56:10.694942 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/0a43561f-bdde-456b-b4a4-2055d4fe6880-host-resolv-conf\") pod \"assisted-installer-controller-kxkrl\" (UID: \"0a43561f-bdde-456b-b4a4-2055d4fe6880\") " pod="assisted-installer/assisted-installer-controller-kxkrl" Mar 08 21:56:10.695094 master-0 kubenswrapper[3962]: I0308 21:56:10.694994 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrprg\" (UniqueName: \"kubernetes.io/projected/0a43561f-bdde-456b-b4a4-2055d4fe6880-kube-api-access-vrprg\") pod \"assisted-installer-controller-kxkrl\" (UID: \"0a43561f-bdde-456b-b4a4-2055d4fe6880\") " pod="assisted-installer/assisted-installer-controller-kxkrl" Mar 08 21:56:10.695094 master-0 kubenswrapper[3962]: I0308 21:56:10.695057 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/0a43561f-bdde-456b-b4a4-2055d4fe6880-host-var-run-resolv-conf\") pod \"assisted-installer-controller-kxkrl\" (UID: \"0a43561f-bdde-456b-b4a4-2055d4fe6880\") " pod="assisted-installer/assisted-installer-controller-kxkrl" Mar 08 21:56:10.695174 master-0 kubenswrapper[3962]: I0308 21:56:10.695094 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/0a43561f-bdde-456b-b4a4-2055d4fe6880-sno-bootstrap-files\") pod \"assisted-installer-controller-kxkrl\" (UID: \"0a43561f-bdde-456b-b4a4-2055d4fe6880\") " pod="assisted-installer/assisted-installer-controller-kxkrl" Mar 08 21:56:10.695174 master-0 kubenswrapper[3962]: I0308 21:56:10.695163 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/0a43561f-bdde-456b-b4a4-2055d4fe6880-host-var-run-resolv-conf\") pod \"assisted-installer-controller-kxkrl\" (UID: \"0a43561f-bdde-456b-b4a4-2055d4fe6880\") " pod="assisted-installer/assisted-installer-controller-kxkrl" Mar 08 21:56:10.722630 master-0 kubenswrapper[3962]: I0308 21:56:10.722580 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrprg\" (UniqueName: \"kubernetes.io/projected/0a43561f-bdde-456b-b4a4-2055d4fe6880-kube-api-access-vrprg\") pod \"assisted-installer-controller-kxkrl\" (UID: \"0a43561f-bdde-456b-b4a4-2055d4fe6880\") " pod="assisted-installer/assisted-installer-controller-kxkrl" Mar 08 21:56:10.843379 master-0 kubenswrapper[3962]: I0308 21:56:10.843286 3962 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-kxkrl" Mar 08 21:56:10.861573 master-0 kubenswrapper[3962]: W0308 21:56:10.861527 3962 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a43561f_bdde_456b_b4a4_2055d4fe6880.slice/crio-996a90111a18f993b31c6404a8133e717c780ce0cf180dace60851f053db5034 WatchSource:0}: Error finding container 996a90111a18f993b31c6404a8133e717c780ce0cf180dace60851f053db5034: Status 404 returned error can't find the container with id 996a90111a18f993b31c6404a8133e717c780ce0cf180dace60851f053db5034 Mar 08 21:56:11.326348 master-0 kubenswrapper[3962]: I0308 21:56:11.326283 3962 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 08 21:56:11.379904 master-0 kubenswrapper[3962]: I0308 21:56:11.379803 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-kxkrl" event={"ID":"0a43561f-bdde-456b-b4a4-2055d4fe6880","Type":"ContainerStarted","Data":"996a90111a18f993b31c6404a8133e717c780ce0cf180dace60851f053db5034"} Mar 08 21:56:12.711834 master-0 kubenswrapper[3962]: I0308 21:56:12.711687 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:56:12.714191 master-0 kubenswrapper[3962]: E0308 21:56:12.711995 3962 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 08 21:56:12.714191 master-0 kubenswrapper[3962]: E0308 21:56:12.712162 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert podName:d287e2ca-f134-4e34-96f7-50a3055ee119 nodeName:}" failed. No retries permitted until 2026-03-08 21:56:16.712124892 +0000 UTC m=+44.345397114 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert") pod "cluster-version-operator-745944c6b7-d8fd8" (UID: "d287e2ca-f134-4e34-96f7-50a3055ee119") : secret "cluster-version-operator-serving-cert" not found Mar 08 21:56:13.241437 master-0 kubenswrapper[3962]: I0308 21:56:13.241356 3962 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 08 21:56:14.394744 master-0 kubenswrapper[3962]: I0308 21:56:14.394639 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" event={"ID":"a21e2296-10cb-4c70-ac3e-2173d35faac4","Type":"ContainerStarted","Data":"33e74f7c7bc9716ac9cd2cfb19a68cc948644c1413dc78e99dffc063fbe5f927"} Mar 08 21:56:14.955902 master-0 kubenswrapper[3962]: I0308 21:56:14.955863 3962 csr.go:261] certificate signing request csr-wpjf7 is approved, waiting to be issued Mar 08 21:56:14.967029 master-0 kubenswrapper[3962]: I0308 21:56:14.966982 3962 csr.go:257] certificate signing request csr-wpjf7 is issued Mar 08 21:56:15.534458 master-0 kubenswrapper[3962]: I0308 21:56:15.534338 3962 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" podStartSLOduration=2.781246963 podStartE2EDuration="7.53430664s" podCreationTimestamp="2026-03-08 21:56:08 +0000 UTC" firstStartedPulling="2026-03-08 21:56:08.589313789 +0000 UTC m=+36.222586021" lastFinishedPulling="2026-03-08 21:56:13.342373456 +0000 UTC m=+40.975645698" observedRunningTime="2026-03-08 21:56:14.423713098 +0000 UTC m=+42.056985330" watchObservedRunningTime="2026-03-08 21:56:15.53430664 +0000 UTC m=+43.167578852" Mar 08 21:56:15.536943 master-0 kubenswrapper[3962]: I0308 21:56:15.534691 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-zgm9p"] Mar 08 21:56:15.536943 master-0 kubenswrapper[3962]: I0308 21:56:15.535224 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-zgm9p" Mar 08 21:56:15.636966 master-0 kubenswrapper[3962]: I0308 21:56:15.636871 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh75n\" (UniqueName: \"kubernetes.io/projected/e15fa7c1-65ea-4956-a262-841d8a79c49f-kube-api-access-hh75n\") pod \"mtu-prober-zgm9p\" (UID: \"e15fa7c1-65ea-4956-a262-841d8a79c49f\") " pod="openshift-network-operator/mtu-prober-zgm9p" Mar 08 21:56:15.738271 master-0 kubenswrapper[3962]: I0308 21:56:15.738195 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh75n\" (UniqueName: \"kubernetes.io/projected/e15fa7c1-65ea-4956-a262-841d8a79c49f-kube-api-access-hh75n\") pod \"mtu-prober-zgm9p\" (UID: \"e15fa7c1-65ea-4956-a262-841d8a79c49f\") " pod="openshift-network-operator/mtu-prober-zgm9p" Mar 08 21:56:15.761216 master-0 kubenswrapper[3962]: I0308 21:56:15.761165 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh75n\" (UniqueName: \"kubernetes.io/projected/e15fa7c1-65ea-4956-a262-841d8a79c49f-kube-api-access-hh75n\") pod \"mtu-prober-zgm9p\" (UID: \"e15fa7c1-65ea-4956-a262-841d8a79c49f\") " pod="openshift-network-operator/mtu-prober-zgm9p" Mar 08 21:56:15.860438 master-0 kubenswrapper[3962]: I0308 21:56:15.860380 3962 util.go:30] "No sandbox for pod can be found. 
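Two readings from the entries just above. First, the kubelet's serving-certificate request (csr-wpjf7) goes from approved to issued within about 11ms, which is what unblocks the kubelet-serving rotation messages that follow. Second, the pod_startup_latency_tracker entry for the network-operator pod reports both podStartE2EDuration (pod creation to observed running) and podStartSLOduration (the same interval minus image pull time), and the reported figures are self-consistent:

    image pull window:   21:56:13.342373456 - 21:56:08.589313789 ≈ 4.753s
    podStartSLOduration: 7.534s (E2E) - 4.753s (pull) ≈ 2.781s

which matches the reported podStartSLOduration=2.781246963.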
Need to start a new one" pod="openshift-network-operator/mtu-prober-zgm9p" Mar 08 21:56:15.970701 master-0 kubenswrapper[3962]: I0308 21:56:15.970577 3962 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-09 21:47:40 +0000 UTC, rotation deadline is 2026-03-09 17:22:02.974599039 +0000 UTC Mar 08 21:56:15.970701 master-0 kubenswrapper[3962]: I0308 21:56:15.970636 3962 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 19h25m47.003968083s for next certificate rotation Mar 08 21:56:16.327065 master-0 kubenswrapper[3962]: W0308 21:56:16.327010 3962 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode15fa7c1_65ea_4956_a262_841d8a79c49f.slice/crio-b3f130b1d0e4df99a0135c201a74b309f0683706f393c93621bb731d2032758d WatchSource:0}: Error finding container b3f130b1d0e4df99a0135c201a74b309f0683706f393c93621bb731d2032758d: Status 404 returned error can't find the container with id b3f130b1d0e4df99a0135c201a74b309f0683706f393c93621bb731d2032758d Mar 08 21:56:16.402164 master-0 kubenswrapper[3962]: I0308 21:56:16.402108 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-zgm9p" event={"ID":"e15fa7c1-65ea-4956-a262-841d8a79c49f","Type":"ContainerStarted","Data":"b3f130b1d0e4df99a0135c201a74b309f0683706f393c93621bb731d2032758d"} Mar 08 21:56:16.753778 master-0 kubenswrapper[3962]: I0308 21:56:16.753694 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:56:16.754644 master-0 kubenswrapper[3962]: E0308 21:56:16.753843 3962 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 08 21:56:16.754644 master-0 kubenswrapper[3962]: E0308 21:56:16.753899 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert podName:d287e2ca-f134-4e34-96f7-50a3055ee119 nodeName:}" failed. No retries permitted until 2026-03-08 21:56:24.753880962 +0000 UTC m=+52.387153164 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert") pod "cluster-version-operator-745944c6b7-d8fd8" (UID: "d287e2ca-f134-4e34-96f7-50a3055ee119") : secret "cluster-version-operator-serving-cert" not found Mar 08 21:56:16.971874 master-0 kubenswrapper[3962]: I0308 21:56:16.971696 3962 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-09 21:47:40 +0000 UTC, rotation deadline is 2026-03-09 16:56:52.485483262 +0000 UTC Mar 08 21:56:16.971874 master-0 kubenswrapper[3962]: I0308 21:56:16.971766 3962 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 19h0m35.513722494s for next certificate rotation Mar 08 21:56:17.408666 master-0 kubenswrapper[3962]: I0308 21:56:17.408517 3962 generic.go:334] "Generic (PLEG): container finished" podID="0a43561f-bdde-456b-b4a4-2055d4fe6880" containerID="075540abc9ccd6697e1ff04ade4d337fce9916d26b47b35e3ef665f65e8db6d7" exitCode=0 Mar 08 21:56:17.408666 master-0 kubenswrapper[3962]: I0308 21:56:17.408601 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-kxkrl" event={"ID":"0a43561f-bdde-456b-b4a4-2055d4fe6880","Type":"ContainerDied","Data":"075540abc9ccd6697e1ff04ade4d337fce9916d26b47b35e3ef665f65e8db6d7"} Mar 08 21:56:17.412110 master-0 kubenswrapper[3962]: I0308 21:56:17.411977 3962 generic.go:334] "Generic (PLEG): container finished" podID="e15fa7c1-65ea-4956-a262-841d8a79c49f" containerID="6fd82c9a243ac415559b6058cdd8b371086e0c724a6c0dd643229ce1967ee982" exitCode=0 Mar 08 21:56:17.412262 master-0 kubenswrapper[3962]: I0308 21:56:17.412118 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-zgm9p" event={"ID":"e15fa7c1-65ea-4956-a262-841d8a79c49f","Type":"ContainerDied","Data":"6fd82c9a243ac415559b6058cdd8b371086e0c724a6c0dd643229ce1967ee982"} Mar 08 21:56:18.449766 master-0 kubenswrapper[3962]: I0308 21:56:18.449626 3962 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-kxkrl" Mar 08 21:56:18.457165 master-0 kubenswrapper[3962]: I0308 21:56:18.457099 3962 util.go:48] "No ready sandbox for pod can be found. 
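The two certificate_manager.go:356 pairs above compute two different rotation deadlines (17:22:02, then 16:56:52) for the same expiry. That is expected: client-go's certificate manager picks the rotation deadline at a jittered point within the certificate's validity (around 70-90% of the lifetime), so each recomputation lands somewhere different. A minimal sketch of that jitter, under those assumptions (the ~24h lifetime is inferred from the log, not stated in it):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline picks a random point in roughly the 70-90% band of the
    // certificate's lifetime, mirroring the jitter visible in the log above.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        notAfter, _ := time.Parse(time.RFC3339, "2026-03-09T21:47:40Z") // expiry from the log
        notBefore := notAfter.Add(-24 * time.Hour)                      // assumed ~24h lifetime
        for i := 0; i < 2; i++ {
            fmt.Println(rotationDeadline(notBefore, notAfter)) // two runs, two deadlines
        }
    }

Both deadlines observed in the log fall inside that band for a certificate of roughly this lifetime.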
Need to start a new one" pod="openshift-network-operator/mtu-prober-zgm9p" Mar 08 21:56:18.568224 master-0 kubenswrapper[3962]: I0308 21:56:18.567854 3962 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/0a43561f-bdde-456b-b4a4-2055d4fe6880-host-resolv-conf\") pod \"0a43561f-bdde-456b-b4a4-2055d4fe6880\" (UID: \"0a43561f-bdde-456b-b4a4-2055d4fe6880\") " Mar 08 21:56:18.568224 master-0 kubenswrapper[3962]: I0308 21:56:18.567967 3962 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrprg\" (UniqueName: \"kubernetes.io/projected/0a43561f-bdde-456b-b4a4-2055d4fe6880-kube-api-access-vrprg\") pod \"0a43561f-bdde-456b-b4a4-2055d4fe6880\" (UID: \"0a43561f-bdde-456b-b4a4-2055d4fe6880\") " Mar 08 21:56:18.568224 master-0 kubenswrapper[3962]: I0308 21:56:18.568050 3962 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/0a43561f-bdde-456b-b4a4-2055d4fe6880-host-var-run-resolv-conf\") pod \"0a43561f-bdde-456b-b4a4-2055d4fe6880\" (UID: \"0a43561f-bdde-456b-b4a4-2055d4fe6880\") " Mar 08 21:56:18.568224 master-0 kubenswrapper[3962]: I0308 21:56:18.568169 3962 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/0a43561f-bdde-456b-b4a4-2055d4fe6880-host-ca-bundle\") pod \"0a43561f-bdde-456b-b4a4-2055d4fe6880\" (UID: \"0a43561f-bdde-456b-b4a4-2055d4fe6880\") " Mar 08 21:56:18.568224 master-0 kubenswrapper[3962]: I0308 21:56:18.568222 3962 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/0a43561f-bdde-456b-b4a4-2055d4fe6880-sno-bootstrap-files\") pod \"0a43561f-bdde-456b-b4a4-2055d4fe6880\" (UID: \"0a43561f-bdde-456b-b4a4-2055d4fe6880\") " Mar 08 21:56:18.568973 master-0 kubenswrapper[3962]: I0308 21:56:18.568279 3962 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hh75n\" (UniqueName: \"kubernetes.io/projected/e15fa7c1-65ea-4956-a262-841d8a79c49f-kube-api-access-hh75n\") pod \"e15fa7c1-65ea-4956-a262-841d8a79c49f\" (UID: \"e15fa7c1-65ea-4956-a262-841d8a79c49f\") " Mar 08 21:56:18.568973 master-0 kubenswrapper[3962]: I0308 21:56:18.568489 3962 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a43561f-bdde-456b-b4a4-2055d4fe6880-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "0a43561f-bdde-456b-b4a4-2055d4fe6880" (UID: "0a43561f-bdde-456b-b4a4-2055d4fe6880"). InnerVolumeSpecName "host-ca-bundle". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 21:56:18.568973 master-0 kubenswrapper[3962]: I0308 21:56:18.568474 3962 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a43561f-bdde-456b-b4a4-2055d4fe6880-sno-bootstrap-files" (OuterVolumeSpecName: "sno-bootstrap-files") pod "0a43561f-bdde-456b-b4a4-2055d4fe6880" (UID: "0a43561f-bdde-456b-b4a4-2055d4fe6880"). InnerVolumeSpecName "sno-bootstrap-files". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 21:56:18.568973 master-0 kubenswrapper[3962]: I0308 21:56:18.568611 3962 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a43561f-bdde-456b-b4a4-2055d4fe6880-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "0a43561f-bdde-456b-b4a4-2055d4fe6880" (UID: "0a43561f-bdde-456b-b4a4-2055d4fe6880"). InnerVolumeSpecName "host-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 21:56:18.568973 master-0 kubenswrapper[3962]: I0308 21:56:18.568674 3962 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a43561f-bdde-456b-b4a4-2055d4fe6880-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "0a43561f-bdde-456b-b4a4-2055d4fe6880" (UID: "0a43561f-bdde-456b-b4a4-2055d4fe6880"). InnerVolumeSpecName "host-var-run-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 21:56:18.574935 master-0 kubenswrapper[3962]: I0308 21:56:18.574843 3962 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a43561f-bdde-456b-b4a4-2055d4fe6880-kube-api-access-vrprg" (OuterVolumeSpecName: "kube-api-access-vrprg") pod "0a43561f-bdde-456b-b4a4-2055d4fe6880" (UID: "0a43561f-bdde-456b-b4a4-2055d4fe6880"). InnerVolumeSpecName "kube-api-access-vrprg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 21:56:18.575361 master-0 kubenswrapper[3962]: I0308 21:56:18.575298 3962 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e15fa7c1-65ea-4956-a262-841d8a79c49f-kube-api-access-hh75n" (OuterVolumeSpecName: "kube-api-access-hh75n") pod "e15fa7c1-65ea-4956-a262-841d8a79c49f" (UID: "e15fa7c1-65ea-4956-a262-841d8a79c49f"). InnerVolumeSpecName "kube-api-access-hh75n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 21:56:18.669605 master-0 kubenswrapper[3962]: I0308 21:56:18.669269 3962 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/0a43561f-bdde-456b-b4a4-2055d4fe6880-host-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 08 21:56:18.669605 master-0 kubenswrapper[3962]: I0308 21:56:18.669330 3962 reconciler_common.go:293] "Volume detached for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/0a43561f-bdde-456b-b4a4-2055d4fe6880-sno-bootstrap-files\") on node \"master-0\" DevicePath \"\"" Mar 08 21:56:18.669605 master-0 kubenswrapper[3962]: I0308 21:56:18.669392 3962 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hh75n\" (UniqueName: \"kubernetes.io/projected/e15fa7c1-65ea-4956-a262-841d8a79c49f-kube-api-access-hh75n\") on node \"master-0\" DevicePath \"\"" Mar 08 21:56:18.669605 master-0 kubenswrapper[3962]: I0308 21:56:18.669411 3962 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/0a43561f-bdde-456b-b4a4-2055d4fe6880-host-resolv-conf\") on node \"master-0\" DevicePath \"\"" Mar 08 21:56:18.669605 master-0 kubenswrapper[3962]: I0308 21:56:18.669468 3962 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrprg\" (UniqueName: \"kubernetes.io/projected/0a43561f-bdde-456b-b4a4-2055d4fe6880-kube-api-access-vrprg\") on node \"master-0\" DevicePath \"\"" Mar 08 21:56:18.669605 master-0 kubenswrapper[3962]: I0308 21:56:18.669492 3962 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/0a43561f-bdde-456b-b4a4-2055d4fe6880-host-var-run-resolv-conf\") on node \"master-0\" DevicePath \"\"" Mar 08 21:56:19.215384 master-0 kubenswrapper[3962]: I0308 21:56:19.213190 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"] Mar 08 21:56:19.215384 master-0 kubenswrapper[3962]: I0308 21:56:19.213517 3962 scope.go:117] "RemoveContainer" containerID="f2753c6ede26e51916276b3918863819c08fcf1e3cfeb773ba0609d9fda8556b" Mar 08 21:56:19.422683 master-0 kubenswrapper[3962]: I0308 21:56:19.421540 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-kxkrl" event={"ID":"0a43561f-bdde-456b-b4a4-2055d4fe6880","Type":"ContainerDied","Data":"996a90111a18f993b31c6404a8133e717c780ce0cf180dace60851f053db5034"} Mar 08 21:56:19.422683 master-0 kubenswrapper[3962]: I0308 21:56:19.421970 3962 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="996a90111a18f993b31c6404a8133e717c780ce0cf180dace60851f053db5034" Mar 08 21:56:19.422683 master-0 kubenswrapper[3962]: I0308 21:56:19.421585 3962 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-kxkrl" Mar 08 21:56:19.425201 master-0 kubenswrapper[3962]: I0308 21:56:19.425135 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-zgm9p" event={"ID":"e15fa7c1-65ea-4956-a262-841d8a79c49f","Type":"ContainerDied","Data":"b3f130b1d0e4df99a0135c201a74b309f0683706f393c93621bb731d2032758d"} Mar 08 21:56:19.425201 master-0 kubenswrapper[3962]: I0308 21:56:19.425177 3962 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3f130b1d0e4df99a0135c201a74b309f0683706f393c93621bb731d2032758d" Mar 08 21:56:19.425637 master-0 kubenswrapper[3962]: I0308 21:56:19.425274 3962 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-zgm9p" Mar 08 21:56:20.432599 master-0 kubenswrapper[3962]: I0308 21:56:20.432514 3962 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log" Mar 08 21:56:20.433570 master-0 kubenswrapper[3962]: I0308 21:56:20.433361 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"141c1c193013aba156bcafd70b058b224242057d2cf9f83ba4dd26b8100e4d3f"} Mar 08 21:56:20.530114 master-0 kubenswrapper[3962]: I0308 21:56:20.529992 3962 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=1.529967291 podStartE2EDuration="1.529967291s" podCreationTimestamp="2026-03-08 21:56:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 21:56:20.453737546 +0000 UTC m=+48.087009778" watchObservedRunningTime="2026-03-08 21:56:20.529967291 +0000 UTC m=+48.163239493" Mar 08 21:56:20.530482 master-0 kubenswrapper[3962]: I0308 21:56:20.530199 3962 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-zgm9p"] Mar 08 21:56:20.536668 master-0 kubenswrapper[3962]: I0308 21:56:20.536594 3962 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-zgm9p"] Mar 08 21:56:21.194453 master-0 kubenswrapper[3962]: I0308 21:56:21.194351 3962 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e15fa7c1-65ea-4956-a262-841d8a79c49f" path="/var/lib/kubelet/pods/e15fa7c1-65ea-4956-a262-841d8a79c49f/volumes" Mar 08 21:56:24.820335 master-0 kubenswrapper[3962]: I0308 21:56:24.820225 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:56:24.821667 master-0 kubenswrapper[3962]: E0308 21:56:24.820426 3962 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 08 21:56:24.821667 master-0 kubenswrapper[3962]: E0308 21:56:24.820518 3962 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert podName:d287e2ca-f134-4e34-96f7-50a3055ee119 nodeName:}" failed. No retries permitted until 2026-03-08 21:56:40.820493019 +0000 UTC m=+68.453765261 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert") pod "cluster-version-operator-745944c6b7-d8fd8" (UID: "d287e2ca-f134-4e34-96f7-50a3055ee119") : secret "cluster-version-operator-serving-cert" not found Mar 08 21:56:25.427454 master-0 kubenswrapper[3962]: I0308 21:56:25.427300 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-l8ltx"] Mar 08 21:56:25.427454 master-0 kubenswrapper[3962]: E0308 21:56:25.427441 3962 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a43561f-bdde-456b-b4a4-2055d4fe6880" containerName="assisted-installer-controller" Mar 08 21:56:25.427693 master-0 kubenswrapper[3962]: I0308 21:56:25.427466 3962 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a43561f-bdde-456b-b4a4-2055d4fe6880" containerName="assisted-installer-controller" Mar 08 21:56:25.427693 master-0 kubenswrapper[3962]: E0308 21:56:25.427482 3962 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e15fa7c1-65ea-4956-a262-841d8a79c49f" containerName="prober" Mar 08 21:56:25.427693 master-0 kubenswrapper[3962]: I0308 21:56:25.427496 3962 state_mem.go:107] "Deleted CPUSet assignment" podUID="e15fa7c1-65ea-4956-a262-841d8a79c49f" containerName="prober" Mar 08 21:56:25.427693 master-0 kubenswrapper[3962]: I0308 21:56:25.427678 3962 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a43561f-bdde-456b-b4a4-2055d4fe6880" containerName="assisted-installer-controller" Mar 08 21:56:25.427844 master-0 kubenswrapper[3962]: I0308 21:56:25.427710 3962 memory_manager.go:354] "RemoveStaleState removing state" podUID="e15fa7c1-65ea-4956-a262-841d8a79c49f" containerName="prober" Mar 08 21:56:25.428184 master-0 kubenswrapper[3962]: I0308 21:56:25.428120 3962 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.430883 master-0 kubenswrapper[3962]: I0308 21:56:25.430842 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 08 21:56:25.431615 master-0 kubenswrapper[3962]: I0308 21:56:25.431569 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 08 21:56:25.433691 master-0 kubenswrapper[3962]: I0308 21:56:25.433638 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 08 21:56:25.436950 master-0 kubenswrapper[3962]: I0308 21:56:25.436868 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 08 21:56:25.525440 master-0 kubenswrapper[3962]: I0308 21:56:25.525365 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-run-k8s-cni-cncf-io\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.525440 master-0 kubenswrapper[3962]: I0308 21:56:25.525430 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-socket-dir-parent\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.525440 master-0 kubenswrapper[3962]: I0308 21:56:25.525455 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-var-lib-cni-multus\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.525790 master-0 kubenswrapper[3962]: I0308 21:56:25.525482 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxg7t\" (UniqueName: \"kubernetes.io/projected/385e69e4-d443-44bb-8ee4-578a1c902c62-kube-api-access-vxg7t\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.525790 master-0 kubenswrapper[3962]: I0308 21:56:25.525554 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-system-cni-dir\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.525790 master-0 kubenswrapper[3962]: I0308 21:56:25.525590 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-cni-dir\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.525790 master-0 kubenswrapper[3962]: I0308 21:56:25.525614 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-var-lib-kubelet\") pod 
\"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.525790 master-0 kubenswrapper[3962]: I0308 21:56:25.525683 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-cnibin\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.525790 master-0 kubenswrapper[3962]: I0308 21:56:25.525715 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-os-release\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.525790 master-0 kubenswrapper[3962]: I0308 21:56:25.525738 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-etc-kubernetes\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.525790 master-0 kubenswrapper[3962]: I0308 21:56:25.525760 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/385e69e4-d443-44bb-8ee4-578a1c902c62-cni-binary-copy\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.525790 master-0 kubenswrapper[3962]: I0308 21:56:25.525782 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-conf-dir\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.526155 master-0 kubenswrapper[3962]: I0308 21:56:25.525804 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-run-multus-certs\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.526155 master-0 kubenswrapper[3962]: I0308 21:56:25.525830 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-run-netns\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.526155 master-0 kubenswrapper[3962]: I0308 21:56:25.525877 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-daemon-config\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.526155 master-0 kubenswrapper[3962]: I0308 21:56:25.525915 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-var-lib-cni-bin\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.526155 master-0 kubenswrapper[3962]: I0308 21:56:25.525946 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-hostroot\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.628785 master-0 kubenswrapper[3962]: I0308 21:56:25.624017 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-74fmb"] Mar 08 21:56:25.628785 master-0 kubenswrapper[3962]: I0308 21:56:25.626333 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-run-k8s-cni-cncf-io\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.628785 master-0 kubenswrapper[3962]: I0308 21:56:25.626390 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-socket-dir-parent\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.628785 master-0 kubenswrapper[3962]: I0308 21:56:25.626411 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-var-lib-cni-multus\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.628785 master-0 kubenswrapper[3962]: I0308 21:56:25.626440 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxg7t\" (UniqueName: \"kubernetes.io/projected/385e69e4-d443-44bb-8ee4-578a1c902c62-kube-api-access-vxg7t\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.628785 master-0 kubenswrapper[3962]: I0308 21:56:25.626521 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-var-lib-cni-multus\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.628785 master-0 kubenswrapper[3962]: I0308 21:56:25.626675 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-system-cni-dir\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.628785 master-0 kubenswrapper[3962]: I0308 21:56:25.626769 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-run-k8s-cni-cncf-io\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.628785 master-0 kubenswrapper[3962]: I0308 
21:56:25.626985 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-socket-dir-parent\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.628785 master-0 kubenswrapper[3962]: I0308 21:56:25.627036 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-cni-dir\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.628785 master-0 kubenswrapper[3962]: I0308 21:56:25.627124 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 21:56:25.628785 master-0 kubenswrapper[3962]: I0308 21:56:25.627163 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-var-lib-kubelet\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.628785 master-0 kubenswrapper[3962]: I0308 21:56:25.627205 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-system-cni-dir\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.628785 master-0 kubenswrapper[3962]: I0308 21:56:25.627290 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-cnibin\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.628785 master-0 kubenswrapper[3962]: I0308 21:56:25.627217 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-cnibin\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.628785 master-0 kubenswrapper[3962]: I0308 21:56:25.627303 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-var-lib-kubelet\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.628785 master-0 kubenswrapper[3962]: I0308 21:56:25.627333 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-os-release\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.628785 master-0 kubenswrapper[3962]: I0308 21:56:25.627363 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-etc-kubernetes\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.628785 master-0 
kubenswrapper[3962]: I0308 21:56:25.627385 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/385e69e4-d443-44bb-8ee4-578a1c902c62-cni-binary-copy\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.633145 master-0 kubenswrapper[3962]: I0308 21:56:25.627405 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-conf-dir\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.633145 master-0 kubenswrapper[3962]: I0308 21:56:25.627424 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-run-multus-certs\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.633145 master-0 kubenswrapper[3962]: I0308 21:56:25.627460 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-run-multus-certs\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.633145 master-0 kubenswrapper[3962]: I0308 21:56:25.627454 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-os-release\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.633145 master-0 kubenswrapper[3962]: I0308 21:56:25.627472 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-etc-kubernetes\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.633145 master-0 kubenswrapper[3962]: I0308 21:56:25.627506 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-run-netns\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.633145 master-0 kubenswrapper[3962]: I0308 21:56:25.627537 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-run-netns\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.633145 master-0 kubenswrapper[3962]: I0308 21:56:25.627566 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-daemon-config\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.633145 master-0 kubenswrapper[3962]: I0308 21:56:25.627600 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-var-lib-cni-bin\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.633145 master-0 kubenswrapper[3962]: I0308 21:56:25.627624 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-hostroot\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.633145 master-0 kubenswrapper[3962]: I0308 21:56:25.627682 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-hostroot\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.633145 master-0 kubenswrapper[3962]: I0308 21:56:25.627570 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-conf-dir\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.633145 master-0 kubenswrapper[3962]: I0308 21:56:25.627731 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-var-lib-cni-bin\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.633145 master-0 kubenswrapper[3962]: I0308 21:56:25.628407 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-daemon-config\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.633145 master-0 kubenswrapper[3962]: I0308 21:56:25.629056 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/385e69e4-d443-44bb-8ee4-578a1c902c62-cni-binary-copy\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.633145 master-0 kubenswrapper[3962]: I0308 21:56:25.629448 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-cni-dir\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:56:25.633145 master-0 kubenswrapper[3962]: I0308 21:56:25.631878 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 08 21:56:25.633145 master-0 kubenswrapper[3962]: I0308 21:56:25.632309 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Mar 08 21:56:25.657104 master-0 kubenswrapper[3962]: I0308 21:56:25.657016 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxg7t\" (UniqueName: \"kubernetes.io/projected/385e69e4-d443-44bb-8ee4-578a1c902c62-kube-api-access-vxg7t\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 
Mar 08 21:56:25.728589 master-0 kubenswrapper[3962]: I0308 21:56:25.728311 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/96a67acb-9cc6-4793-b99a-01479b239d76-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 21:56:25.728589 master-0 kubenswrapper[3962]: I0308 21:56:25.728425 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9xj9\" (UniqueName: \"kubernetes.io/projected/96a67acb-9cc6-4793-b99a-01479b239d76-kube-api-access-d9xj9\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 21:56:25.728589 master-0 kubenswrapper[3962]: I0308 21:56:25.728473 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-system-cni-dir\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 21:56:25.728589 master-0 kubenswrapper[3962]: I0308 21:56:25.728510 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-tuning-conf-dir\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 21:56:25.728589 master-0 kubenswrapper[3962]: I0308 21:56:25.728548 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-os-release\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 21:56:25.729351 master-0 kubenswrapper[3962]: I0308 21:56:25.728686 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/96a67acb-9cc6-4793-b99a-01479b239d76-cni-binary-copy\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 21:56:25.729351 master-0 kubenswrapper[3962]: I0308 21:56:25.728828 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-cnibin\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 21:56:25.729351 master-0 kubenswrapper[3962]: I0308 21:56:25.728884 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/96a67acb-9cc6-4793-b99a-01479b239d76-whereabouts-configmap\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 21:56:25.754552 master-0 kubenswrapper[3962]: I0308 21:56:25.754502 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-l8ltx"
Mar 08 21:56:25.776604 master-0 kubenswrapper[3962]: W0308 21:56:25.776465 3962 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod385e69e4_d443_44bb_8ee4_578a1c902c62.slice/crio-3df8fc2c08893e29b5ce6cbd652644e9b2d19ac599e4617011853ce2cd739da7 WatchSource:0}: Error finding container 3df8fc2c08893e29b5ce6cbd652644e9b2d19ac599e4617011853ce2cd739da7: Status 404 returned error can't find the container with id 3df8fc2c08893e29b5ce6cbd652644e9b2d19ac599e4617011853ce2cd739da7
Mar 08 21:56:25.830035 master-0 kubenswrapper[3962]: I0308 21:56:25.829516 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/96a67acb-9cc6-4793-b99a-01479b239d76-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 21:56:25.830035 master-0 kubenswrapper[3962]: I0308 21:56:25.829583 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9xj9\" (UniqueName: \"kubernetes.io/projected/96a67acb-9cc6-4793-b99a-01479b239d76-kube-api-access-d9xj9\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 21:56:25.830035 master-0 kubenswrapper[3962]: I0308 21:56:25.829621 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-system-cni-dir\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 21:56:25.830035 master-0 kubenswrapper[3962]: I0308 21:56:25.829657 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-tuning-conf-dir\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 21:56:25.830035 master-0 kubenswrapper[3962]: I0308 21:56:25.829694 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-os-release\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 21:56:25.830035 master-0 kubenswrapper[3962]: I0308 21:56:25.829794 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-system-cni-dir\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 21:56:25.830035 master-0 kubenswrapper[3962]: I0308 21:56:25.829970 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-tuning-conf-dir\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 21:56:25.831378 master-0 kubenswrapper[3962]: I0308 21:56:25.830168 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-os-release\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 21:56:25.831378 master-0 kubenswrapper[3962]: I0308 21:56:25.830182 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/96a67acb-9cc6-4793-b99a-01479b239d76-cni-binary-copy\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 21:56:25.831378 master-0 kubenswrapper[3962]: I0308 21:56:25.830397 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-cnibin\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 21:56:25.831378 master-0 kubenswrapper[3962]: I0308 21:56:25.830454 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/96a67acb-9cc6-4793-b99a-01479b239d76-whereabouts-configmap\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 21:56:25.831378 master-0 kubenswrapper[3962]: I0308 21:56:25.830849 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/96a67acb-9cc6-4793-b99a-01479b239d76-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 21:56:25.831378 master-0 kubenswrapper[3962]: I0308 21:56:25.830935 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-cnibin\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 21:56:25.831715 master-0 kubenswrapper[3962]: I0308 21:56:25.831637 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/96a67acb-9cc6-4793-b99a-01479b239d76-cni-binary-copy\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 21:56:25.831785 master-0 kubenswrapper[3962]: I0308 21:56:25.831731 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/96a67acb-9cc6-4793-b99a-01479b239d76-whereabouts-configmap\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 21:56:25.861555 master-0 kubenswrapper[3962]: I0308 21:56:25.861468 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9xj9\" (UniqueName: \"kubernetes.io/projected/96a67acb-9cc6-4793-b99a-01479b239d76-kube-api-access-d9xj9\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 21:56:25.942242 master-0 kubenswrapper[3962]: I0308 21:56:25.942144 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 21:56:25.958487 master-0 kubenswrapper[3962]: W0308 21:56:25.958420 3962 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96a67acb_9cc6_4793_b99a_01479b239d76.slice/crio-be3668c70e364cf34d2eac1fb81dcded6ee681b77814828fbe8aa2c73f270566 WatchSource:0}: Error finding container be3668c70e364cf34d2eac1fb81dcded6ee681b77814828fbe8aa2c73f270566: Status 404 returned error can't find the container with id be3668c70e364cf34d2eac1fb81dcded6ee681b77814828fbe8aa2c73f270566
Mar 08 21:56:26.419424 master-0 kubenswrapper[3962]: I0308 21:56:26.419329 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-lqdbv"]
Mar 08 21:56:26.420624 master-0 kubenswrapper[3962]: I0308 21:56:26.419896 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lqdbv"
Mar 08 21:56:26.420759 master-0 kubenswrapper[3962]: E0308 21:56:26.420585 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lqdbv" podUID="44e67e41-045e-42ef-8f60-6ef15606d6a2"
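The "Error syncing pod" entry above will keep repeating for network-metrics-daemon-lqdbv until a CNI configuration file appears in /etc/kubernetes/cni/net.d/, which on this node can only happen once the multus and ovnkube-node pods being mounted above come up. A rough sketch of the readiness test behind NetworkReady=false, assuming a plain file-glob check; this is illustrative, not the actual kubelet/CRI-O code:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniReady reports whether any CNI config file exists in confDir.
// The runtime reports NetworkReady=false until one shows up.
func cniReady(confDir string) (bool, error) {
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pattern))
		if err != nil {
			return false, err
		}
		if len(matches) > 0 {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ready, err := cniReady("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("NetworkReady:", ready)
}

Host-networked pods such as multus-l8ltx are exempt from this check, which is why their sandboxes start below while network-metrics-daemon-lqdbv stays blocked.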
pod="openshift-multus/network-metrics-daemon-lqdbv" podUID="44e67e41-045e-42ef-8f60-6ef15606d6a2" Mar 08 21:56:26.452451 master-0 kubenswrapper[3962]: I0308 21:56:26.452355 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l8ltx" event={"ID":"385e69e4-d443-44bb-8ee4-578a1c902c62","Type":"ContainerStarted","Data":"3df8fc2c08893e29b5ce6cbd652644e9b2d19ac599e4617011853ce2cd739da7"} Mar 08 21:56:26.453908 master-0 kubenswrapper[3962]: I0308 21:56:26.453844 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74fmb" event={"ID":"96a67acb-9cc6-4793-b99a-01479b239d76","Type":"ContainerStarted","Data":"be3668c70e364cf34d2eac1fb81dcded6ee681b77814828fbe8aa2c73f270566"} Mar 08 21:56:26.537550 master-0 kubenswrapper[3962]: I0308 21:56:26.537461 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs\") pod \"network-metrics-daemon-lqdbv\" (UID: \"44e67e41-045e-42ef-8f60-6ef15606d6a2\") " pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:56:26.537550 master-0 kubenswrapper[3962]: I0308 21:56:26.537530 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl4xt\" (UniqueName: \"kubernetes.io/projected/44e67e41-045e-42ef-8f60-6ef15606d6a2-kube-api-access-zl4xt\") pod \"network-metrics-daemon-lqdbv\" (UID: \"44e67e41-045e-42ef-8f60-6ef15606d6a2\") " pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:56:26.638263 master-0 kubenswrapper[3962]: I0308 21:56:26.638186 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs\") pod \"network-metrics-daemon-lqdbv\" (UID: \"44e67e41-045e-42ef-8f60-6ef15606d6a2\") " pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:56:26.638263 master-0 kubenswrapper[3962]: I0308 21:56:26.638242 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl4xt\" (UniqueName: \"kubernetes.io/projected/44e67e41-045e-42ef-8f60-6ef15606d6a2-kube-api-access-zl4xt\") pod \"network-metrics-daemon-lqdbv\" (UID: \"44e67e41-045e-42ef-8f60-6ef15606d6a2\") " pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:56:26.638611 master-0 kubenswrapper[3962]: E0308 21:56:26.638526 3962 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 21:56:26.639002 master-0 kubenswrapper[3962]: E0308 21:56:26.638954 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs podName:44e67e41-045e-42ef-8f60-6ef15606d6a2 nodeName:}" failed. No retries permitted until 2026-03-08 21:56:27.138636097 +0000 UTC m=+54.771908329 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs") pod "network-metrics-daemon-lqdbv" (UID: "44e67e41-045e-42ef-8f60-6ef15606d6a2") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 21:56:26.660315 master-0 kubenswrapper[3962]: I0308 21:56:26.660257 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl4xt\" (UniqueName: \"kubernetes.io/projected/44e67e41-045e-42ef-8f60-6ef15606d6a2-kube-api-access-zl4xt\") pod \"network-metrics-daemon-lqdbv\" (UID: \"44e67e41-045e-42ef-8f60-6ef15606d6a2\") " pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:56:27.142179 master-0 kubenswrapper[3962]: E0308 21:56:27.142130 3962 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 21:56:27.142912 master-0 kubenswrapper[3962]: E0308 21:56:27.142222 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs podName:44e67e41-045e-42ef-8f60-6ef15606d6a2 nodeName:}" failed. No retries permitted until 2026-03-08 21:56:28.142201154 +0000 UTC m=+55.775473356 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs") pod "network-metrics-daemon-lqdbv" (UID: "44e67e41-045e-42ef-8f60-6ef15606d6a2") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 21:56:27.142912 master-0 kubenswrapper[3962]: I0308 21:56:27.141941 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs\") pod \"network-metrics-daemon-lqdbv\" (UID: \"44e67e41-045e-42ef-8f60-6ef15606d6a2\") " pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:56:28.151177 master-0 kubenswrapper[3962]: I0308 21:56:28.151113 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs\") pod \"network-metrics-daemon-lqdbv\" (UID: \"44e67e41-045e-42ef-8f60-6ef15606d6a2\") " pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:56:28.151769 master-0 kubenswrapper[3962]: E0308 21:56:28.151336 3962 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 21:56:28.151769 master-0 kubenswrapper[3962]: E0308 21:56:28.151427 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs podName:44e67e41-045e-42ef-8f60-6ef15606d6a2 nodeName:}" failed. No retries permitted until 2026-03-08 21:56:30.151406853 +0000 UTC m=+57.784679045 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs") pod "network-metrics-daemon-lqdbv" (UID: "44e67e41-045e-42ef-8f60-6ef15606d6a2") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 21:56:28.187281 master-0 kubenswrapper[3962]: I0308 21:56:28.187129 3962 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:56:28.187346 master-0 kubenswrapper[3962]: E0308 21:56:28.187287 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lqdbv" podUID="44e67e41-045e-42ef-8f60-6ef15606d6a2" Mar 08 21:56:28.463661 master-0 kubenswrapper[3962]: I0308 21:56:28.463591 3962 generic.go:334] "Generic (PLEG): container finished" podID="96a67acb-9cc6-4793-b99a-01479b239d76" containerID="1de5c137bbb7c8c06869f9101463a33e4cb94c8693913396854f5dedf16bf314" exitCode=0 Mar 08 21:56:28.463661 master-0 kubenswrapper[3962]: I0308 21:56:28.463646 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74fmb" event={"ID":"96a67acb-9cc6-4793-b99a-01479b239d76","Type":"ContainerDied","Data":"1de5c137bbb7c8c06869f9101463a33e4cb94c8693913396854f5dedf16bf314"} Mar 08 21:56:30.167988 master-0 kubenswrapper[3962]: I0308 21:56:30.167891 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs\") pod \"network-metrics-daemon-lqdbv\" (UID: \"44e67e41-045e-42ef-8f60-6ef15606d6a2\") " pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:56:30.168647 master-0 kubenswrapper[3962]: E0308 21:56:30.168157 3962 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 21:56:30.168647 master-0 kubenswrapper[3962]: E0308 21:56:30.168253 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs podName:44e67e41-045e-42ef-8f60-6ef15606d6a2 nodeName:}" failed. No retries permitted until 2026-03-08 21:56:34.168226708 +0000 UTC m=+61.801498950 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs") pod "network-metrics-daemon-lqdbv" (UID: "44e67e41-045e-42ef-8f60-6ef15606d6a2") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 21:56:30.187097 master-0 kubenswrapper[3962]: I0308 21:56:30.186980 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:56:30.187283 master-0 kubenswrapper[3962]: E0308 21:56:30.187216 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lqdbv" podUID="44e67e41-045e-42ef-8f60-6ef15606d6a2" Mar 08 21:56:32.186843 master-0 kubenswrapper[3962]: I0308 21:56:32.186641 3962 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:56:32.186843 master-0 kubenswrapper[3962]: E0308 21:56:32.186817 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lqdbv" podUID="44e67e41-045e-42ef-8f60-6ef15606d6a2" Mar 08 21:56:34.186664 master-0 kubenswrapper[3962]: I0308 21:56:34.186576 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:56:34.187709 master-0 kubenswrapper[3962]: E0308 21:56:34.186737 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lqdbv" podUID="44e67e41-045e-42ef-8f60-6ef15606d6a2" Mar 08 21:56:34.205230 master-0 kubenswrapper[3962]: I0308 21:56:34.205160 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs\") pod \"network-metrics-daemon-lqdbv\" (UID: \"44e67e41-045e-42ef-8f60-6ef15606d6a2\") " pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:56:34.205590 master-0 kubenswrapper[3962]: E0308 21:56:34.205315 3962 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 21:56:34.205590 master-0 kubenswrapper[3962]: E0308 21:56:34.205385 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs podName:44e67e41-045e-42ef-8f60-6ef15606d6a2 nodeName:}" failed. No retries permitted until 2026-03-08 21:56:42.205368683 +0000 UTC m=+69.838640885 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs") pod "network-metrics-daemon-lqdbv" (UID: "44e67e41-045e-42ef-8f60-6ef15606d6a2") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 21:56:36.187495 master-0 kubenswrapper[3962]: I0308 21:56:36.187386 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:56:36.188435 master-0 kubenswrapper[3962]: E0308 21:56:36.187536 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lqdbv" podUID="44e67e41-045e-42ef-8f60-6ef15606d6a2" Mar 08 21:56:37.821422 master-0 kubenswrapper[3962]: I0308 21:56:37.820989 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm"] Mar 08 21:56:37.822184 master-0 kubenswrapper[3962]: I0308 21:56:37.821531 3962 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" Mar 08 21:56:37.824635 master-0 kubenswrapper[3962]: I0308 21:56:37.823837 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 08 21:56:37.824635 master-0 kubenswrapper[3962]: I0308 21:56:37.824062 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 08 21:56:37.825982 master-0 kubenswrapper[3962]: I0308 21:56:37.824935 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 08 21:56:37.825982 master-0 kubenswrapper[3962]: I0308 21:56:37.824998 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 08 21:56:37.825982 master-0 kubenswrapper[3962]: I0308 21:56:37.825135 3962 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 08 21:56:37.942435 master-0 kubenswrapper[3962]: I0308 21:56:37.942120 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/081acedd-4c88-461f-80f3-e80fdbadb725-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-ngrjm\" (UID: \"081acedd-4c88-461f-80f3-e80fdbadb725\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" Mar 08 21:56:37.942435 master-0 kubenswrapper[3962]: I0308 21:56:37.942225 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpxls\" (UniqueName: \"kubernetes.io/projected/081acedd-4c88-461f-80f3-e80fdbadb725-kube-api-access-cpxls\") pod \"ovnkube-control-plane-66b55d57d-ngrjm\" (UID: \"081acedd-4c88-461f-80f3-e80fdbadb725\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" Mar 08 21:56:37.942435 master-0 kubenswrapper[3962]: I0308 21:56:37.942272 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/081acedd-4c88-461f-80f3-e80fdbadb725-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-ngrjm\" (UID: \"081acedd-4c88-461f-80f3-e80fdbadb725\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" Mar 08 21:56:37.942435 master-0 kubenswrapper[3962]: I0308 21:56:37.942307 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/081acedd-4c88-461f-80f3-e80fdbadb725-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-ngrjm\" (UID: \"081acedd-4c88-461f-80f3-e80fdbadb725\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" Mar 08 21:56:38.025544 master-0 kubenswrapper[3962]: I0308 21:56:38.025383 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5m5qr"] Mar 08 21:56:38.027291 master-0 kubenswrapper[3962]: I0308 21:56:38.026819 3962 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.035879 master-0 kubenswrapper[3962]: I0308 21:56:38.035742 3962 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 08 21:56:38.038847 master-0 kubenswrapper[3962]: I0308 21:56:38.038783 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 08 21:56:38.043910 master-0 kubenswrapper[3962]: I0308 21:56:38.042858 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/081acedd-4c88-461f-80f3-e80fdbadb725-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-ngrjm\" (UID: \"081acedd-4c88-461f-80f3-e80fdbadb725\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" Mar 08 21:56:38.043910 master-0 kubenswrapper[3962]: I0308 21:56:38.042910 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/081acedd-4c88-461f-80f3-e80fdbadb725-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-ngrjm\" (UID: \"081acedd-4c88-461f-80f3-e80fdbadb725\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" Mar 08 21:56:38.043910 master-0 kubenswrapper[3962]: I0308 21:56:38.043767 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/081acedd-4c88-461f-80f3-e80fdbadb725-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-ngrjm\" (UID: \"081acedd-4c88-461f-80f3-e80fdbadb725\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" Mar 08 21:56:38.043910 master-0 kubenswrapper[3962]: I0308 21:56:38.043844 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/081acedd-4c88-461f-80f3-e80fdbadb725-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-ngrjm\" (UID: \"081acedd-4c88-461f-80f3-e80fdbadb725\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" Mar 08 21:56:38.043910 master-0 kubenswrapper[3962]: I0308 21:56:38.043874 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/081acedd-4c88-461f-80f3-e80fdbadb725-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-ngrjm\" (UID: \"081acedd-4c88-461f-80f3-e80fdbadb725\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" Mar 08 21:56:38.043910 master-0 kubenswrapper[3962]: I0308 21:56:38.043884 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpxls\" (UniqueName: \"kubernetes.io/projected/081acedd-4c88-461f-80f3-e80fdbadb725-kube-api-access-cpxls\") pod \"ovnkube-control-plane-66b55d57d-ngrjm\" (UID: \"081acedd-4c88-461f-80f3-e80fdbadb725\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" Mar 08 21:56:38.049022 master-0 kubenswrapper[3962]: I0308 21:56:38.048659 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/081acedd-4c88-461f-80f3-e80fdbadb725-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-ngrjm\" (UID: \"081acedd-4c88-461f-80f3-e80fdbadb725\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" Mar 08 
21:56:38.069045 master-0 kubenswrapper[3962]: I0308 21:56:38.068973 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpxls\" (UniqueName: \"kubernetes.io/projected/081acedd-4c88-461f-80f3-e80fdbadb725-kube-api-access-cpxls\") pod \"ovnkube-control-plane-66b55d57d-ngrjm\" (UID: \"081acedd-4c88-461f-80f3-e80fdbadb725\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" Mar 08 21:56:38.138641 master-0 kubenswrapper[3962]: I0308 21:56:38.138515 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" Mar 08 21:56:38.144671 master-0 kubenswrapper[3962]: I0308 21:56:38.144449 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-run-netns\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.144671 master-0 kubenswrapper[3962]: I0308 21:56:38.144499 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-kubelet\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.144671 master-0 kubenswrapper[3962]: I0308 21:56:38.144522 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-run-openvswitch\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.144671 master-0 kubenswrapper[3962]: I0308 21:56:38.144543 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-var-lib-openvswitch\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.144671 master-0 kubenswrapper[3962]: I0308 21:56:38.144567 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-run-ovn\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.144671 master-0 kubenswrapper[3962]: I0308 21:56:38.144588 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3624c541-56bf-4e7e-9460-6069eca194b2-ovn-node-metrics-cert\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.144671 master-0 kubenswrapper[3962]: I0308 21:56:38.144609 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-etc-openvswitch\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.144671 master-0 kubenswrapper[3962]: I0308 21:56:38.144632 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-cni-netd\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.145086 master-0 kubenswrapper[3962]: I0308 21:56:38.144676 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3624c541-56bf-4e7e-9460-6069eca194b2-env-overrides\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.145086 master-0 kubenswrapper[3962]: I0308 21:56:38.144715 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-cni-bin\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.145086 master-0 kubenswrapper[3962]: I0308 21:56:38.144742 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-run-systemd\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.145086 master-0 kubenswrapper[3962]: I0308 21:56:38.144761 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3624c541-56bf-4e7e-9460-6069eca194b2-ovnkube-script-lib\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.145086 master-0 kubenswrapper[3962]: I0308 21:56:38.144880 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3624c541-56bf-4e7e-9460-6069eca194b2-ovnkube-config\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.145086 master-0 kubenswrapper[3962]: I0308 21:56:38.144963 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-systemd-units\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.145086 master-0 kubenswrapper[3962]: I0308 21:56:38.144997 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-slash\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.145086 master-0 kubenswrapper[3962]: I0308 21:56:38.145019 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-run-ovn-kubernetes\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.145086 master-0 kubenswrapper[3962]: I0308 21:56:38.145063 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.145403 master-0 kubenswrapper[3962]: I0308 21:56:38.145118 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-log-socket\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.145403 master-0 kubenswrapper[3962]: I0308 21:56:38.145141 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n96f6\" (UniqueName: \"kubernetes.io/projected/3624c541-56bf-4e7e-9460-6069eca194b2-kube-api-access-n96f6\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.145403 master-0 kubenswrapper[3962]: I0308 21:56:38.145171 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-node-log\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.187728 master-0 kubenswrapper[3962]: I0308 21:56:38.187048 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:56:38.187728 master-0 kubenswrapper[3962]: E0308 21:56:38.187255 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lqdbv" podUID="44e67e41-045e-42ef-8f60-6ef15606d6a2" Mar 08 21:56:38.246284 master-0 kubenswrapper[3962]: I0308 21:56:38.246190 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.246497 master-0 kubenswrapper[3962]: I0308 21:56:38.246339 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.246497 master-0 kubenswrapper[3962]: I0308 21:56:38.246380 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-log-socket\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.246616 master-0 kubenswrapper[3962]: I0308 21:56:38.246521 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-log-socket\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.246676 master-0 kubenswrapper[3962]: I0308 21:56:38.246622 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n96f6\" (UniqueName: \"kubernetes.io/projected/3624c541-56bf-4e7e-9460-6069eca194b2-kube-api-access-n96f6\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.246742 master-0 kubenswrapper[3962]: I0308 21:56:38.246696 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-node-log\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.246871 master-0 kubenswrapper[3962]: I0308 21:56:38.246828 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-node-log\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.246944 master-0 kubenswrapper[3962]: I0308 21:56:38.246831 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-run-netns\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.246944 master-0 kubenswrapper[3962]: I0308 21:56:38.246895 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-run-netns\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.250052 master-0 kubenswrapper[3962]: I0308 21:56:38.246986 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-kubelet\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.250052 master-0 kubenswrapper[3962]: I0308 21:56:38.247038 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-run-openvswitch\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.250052 master-0 kubenswrapper[3962]: I0308 21:56:38.247088 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-kubelet\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.250052 master-0 kubenswrapper[3962]: I0308 21:56:38.247118 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-var-lib-openvswitch\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.250052 master-0 kubenswrapper[3962]: I0308 21:56:38.247135 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-run-openvswitch\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.250052 master-0 kubenswrapper[3962]: I0308 21:56:38.247163 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-run-ovn\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.250052 master-0 kubenswrapper[3962]: I0308 21:56:38.247177 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-var-lib-openvswitch\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.250052 master-0 kubenswrapper[3962]: I0308 21:56:38.247206 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3624c541-56bf-4e7e-9460-6069eca194b2-ovn-node-metrics-cert\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.250052 master-0 kubenswrapper[3962]: I0308 21:56:38.247242 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-run-ovn\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.250052 master-0 kubenswrapper[3962]: I0308 21:56:38.247252 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-etc-openvswitch\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.250052 master-0 kubenswrapper[3962]: I0308 21:56:38.247322 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-cni-netd\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.250052 master-0 kubenswrapper[3962]: I0308 21:56:38.247324 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-etc-openvswitch\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.250052 master-0 kubenswrapper[3962]: I0308 21:56:38.247391 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-cni-netd\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.250052 master-0 kubenswrapper[3962]: I0308 21:56:38.247365 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3624c541-56bf-4e7e-9460-6069eca194b2-env-overrides\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.250052 master-0 kubenswrapper[3962]: I0308 21:56:38.247475 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-cni-bin\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.250052 master-0 kubenswrapper[3962]: I0308 21:56:38.247523 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-run-systemd\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.250052 master-0 kubenswrapper[3962]: I0308 21:56:38.247576 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3624c541-56bf-4e7e-9460-6069eca194b2-ovnkube-script-lib\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.250052 master-0 kubenswrapper[3962]: I0308 21:56:38.247614 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-cni-bin\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.251390 master-0 kubenswrapper[3962]: I0308 21:56:38.247624 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3624c541-56bf-4e7e-9460-6069eca194b2-ovnkube-config\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.251390 master-0 kubenswrapper[3962]: I0308 21:56:38.247671 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-systemd-units\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.251390 master-0 kubenswrapper[3962]: I0308 21:56:38.247710 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-slash\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.251390 master-0 kubenswrapper[3962]: I0308 21:56:38.247726 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-run-systemd\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.251390 master-0 kubenswrapper[3962]: I0308 21:56:38.247745 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-run-ovn-kubernetes\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.251390 master-0 kubenswrapper[3962]: I0308 21:56:38.247784 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-systemd-units\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.251390 master-0 kubenswrapper[3962]: I0308 21:56:38.247852 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-slash\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.251390 master-0 kubenswrapper[3962]: I0308 21:56:38.247959 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-run-ovn-kubernetes\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.251390 master-0 kubenswrapper[3962]: I0308 21:56:38.248162 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/3624c541-56bf-4e7e-9460-6069eca194b2-env-overrides\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.251390 master-0 kubenswrapper[3962]: I0308 21:56:38.248815 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3624c541-56bf-4e7e-9460-6069eca194b2-ovnkube-script-lib\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.251390 master-0 kubenswrapper[3962]: I0308 21:56:38.249131 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3624c541-56bf-4e7e-9460-6069eca194b2-ovnkube-config\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.252779 master-0 kubenswrapper[3962]: I0308 21:56:38.252733 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3624c541-56bf-4e7e-9460-6069eca194b2-ovn-node-metrics-cert\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.264243 master-0 kubenswrapper[3962]: I0308 21:56:38.264196 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n96f6\" (UniqueName: \"kubernetes.io/projected/3624c541-56bf-4e7e-9460-6069eca194b2-kube-api-access-n96f6\") pod \"ovnkube-node-5m5qr\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:56:38.370145 master-0 kubenswrapper[3962]: I0308 21:56:38.369992 3962 util.go:30] "No sandbox for pod can be found. 
Mar 08 21:56:38.460694 master-0 kubenswrapper[3962]: W0308 21:56:38.460644 3962 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod081acedd_4c88_461f_80f3_e80fdbadb725.slice/crio-6b1e7aff193baf892eff0d308b2df2d4df9815b9047c9a97600c2e10f5583a8b WatchSource:0}: Error finding container 6b1e7aff193baf892eff0d308b2df2d4df9815b9047c9a97600c2e10f5583a8b: Status 404 returned error can't find the container with id 6b1e7aff193baf892eff0d308b2df2d4df9815b9047c9a97600c2e10f5583a8b
Mar 08 21:56:38.490672 master-0 kubenswrapper[3962]: I0308 21:56:38.490613 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" event={"ID":"3624c541-56bf-4e7e-9460-6069eca194b2","Type":"ContainerStarted","Data":"63b660af69f2087dc4c60773633358d5b6c0baf9d89578945f2e2d8011d5c68e"}
Mar 08 21:56:38.492277 master-0 kubenswrapper[3962]: I0308 21:56:38.492257 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" event={"ID":"081acedd-4c88-461f-80f3-e80fdbadb725","Type":"ContainerStarted","Data":"6b1e7aff193baf892eff0d308b2df2d4df9815b9047c9a97600c2e10f5583a8b"}
Mar 08 21:56:39.497415 master-0 kubenswrapper[3962]: I0308 21:56:39.497354 3962 generic.go:334] "Generic (PLEG): container finished" podID="96a67acb-9cc6-4793-b99a-01479b239d76" containerID="8100187bff84fd39b1869b62c92c77062e916e1f9e3462572f5572d1caef3b83" exitCode=0
Mar 08 21:56:39.498049 master-0 kubenswrapper[3962]: I0308 21:56:39.497439 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74fmb" event={"ID":"96a67acb-9cc6-4793-b99a-01479b239d76","Type":"ContainerDied","Data":"8100187bff84fd39b1869b62c92c77062e916e1f9e3462572f5572d1caef3b83"}
Mar 08 21:56:39.499526 master-0 kubenswrapper[3962]: I0308 21:56:39.499493 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" event={"ID":"081acedd-4c88-461f-80f3-e80fdbadb725","Type":"ContainerStarted","Data":"9383b71d5d3cd947ccf24cbb393c63b89674ed85bec2d2f62c05a8b0707848a8"}
Mar 08 21:56:39.502516 master-0 kubenswrapper[3962]: I0308 21:56:39.502457 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l8ltx" event={"ID":"385e69e4-d443-44bb-8ee4-578a1c902c62","Type":"ContainerStarted","Data":"c4dbb259e0e16bae260c7aeab514c3bce22a0a1df01d7fb94250b416bfcd06a0"}
Mar 08 21:56:40.187026 master-0 kubenswrapper[3962]: I0308 21:56:40.186940 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lqdbv"
Mar 08 21:56:40.187283 master-0 kubenswrapper[3962]: E0308 21:56:40.187186 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lqdbv" podUID="44e67e41-045e-42ef-8f60-6ef15606d6a2"
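
Note the interleaving: ovnkube-node's sandbox starts while other pods (network-metrics-daemon-lqdbv here, network-check-target-djlff below) are skipped with "network is not ready". The runtime reports NetworkReady=false until a CNI configuration file appears, and the kubelet refuses to sync regular pods behind that gate; the errors recur roughly every two seconds until ovnkube-node comes up and writes its config. A sketch of the readiness test implied by the message, assuming simple globbing of the conf dir (CRI-O's actual check differs in detail):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // networkReady reports nil once at least one CNI config file exists.
    func networkReady(confDir string) error {
        for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
            matches, err := filepath.Glob(filepath.Join(confDir, pattern))
            if err != nil {
                return err
            }
            if len(matches) > 0 {
                return nil // the network provider has written its config
            }
        }
        return fmt.Errorf("no CNI configuration file in %s. Has your network provider started?", confDir)
    }

    func main() {
        if err := networkReady("/etc/kubernetes/cni/net.d/"); err != nil {
            fmt.Fprintln(os.Stderr, "NetworkReady=false:", err)
        }
    }
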
pod="openshift-multus/network-metrics-daemon-lqdbv" podUID="44e67e41-045e-42ef-8f60-6ef15606d6a2" Mar 08 21:56:40.869956 master-0 kubenswrapper[3962]: I0308 21:56:40.869869 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:56:40.870636 master-0 kubenswrapper[3962]: E0308 21:56:40.870145 3962 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 08 21:56:40.870636 master-0 kubenswrapper[3962]: E0308 21:56:40.870329 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert podName:d287e2ca-f134-4e34-96f7-50a3055ee119 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:12.870306716 +0000 UTC m=+100.503578918 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert") pod "cluster-version-operator-745944c6b7-d8fd8" (UID: "d287e2ca-f134-4e34-96f7-50a3055ee119") : secret "cluster-version-operator-serving-cert" not found Mar 08 21:56:41.002965 master-0 kubenswrapper[3962]: I0308 21:56:41.002859 3962 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-l8ltx" podStartSLOduration=3.2224072870000002 podStartE2EDuration="16.002802858s" podCreationTimestamp="2026-03-08 21:56:25 +0000 UTC" firstStartedPulling="2026-03-08 21:56:25.781890138 +0000 UTC m=+53.415162380" lastFinishedPulling="2026-03-08 21:56:38.562285699 +0000 UTC m=+66.195557951" observedRunningTime="2026-03-08 21:56:39.532936559 +0000 UTC m=+67.166208761" watchObservedRunningTime="2026-03-08 21:56:41.002802858 +0000 UTC m=+68.636075070" Mar 08 21:56:41.007588 master-0 kubenswrapper[3962]: I0308 21:56:41.003439 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-djlff"] Mar 08 21:56:41.007588 master-0 kubenswrapper[3962]: I0308 21:56:41.003808 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:56:41.007588 master-0 kubenswrapper[3962]: E0308 21:56:41.003872 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-djlff" podUID="f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e" Mar 08 21:56:41.072958 master-0 kubenswrapper[3962]: I0308 21:56:41.072894 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5xq4\" (UniqueName: \"kubernetes.io/projected/f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e-kube-api-access-l5xq4\") pod \"network-check-target-djlff\" (UID: \"f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e\") " pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:56:41.174053 master-0 kubenswrapper[3962]: I0308 21:56:41.173831 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5xq4\" (UniqueName: \"kubernetes.io/projected/f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e-kube-api-access-l5xq4\") pod \"network-check-target-djlff\" (UID: \"f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e\") " pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:56:41.198949 master-0 kubenswrapper[3962]: E0308 21:56:41.198879 3962 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 08 21:56:41.198949 master-0 kubenswrapper[3962]: E0308 21:56:41.198920 3962 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 08 21:56:41.198949 master-0 kubenswrapper[3962]: E0308 21:56:41.198940 3962 projected.go:194] Error preparing data for projected volume kube-api-access-l5xq4 for pod openshift-network-diagnostics/network-check-target-djlff: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 21:56:41.199390 master-0 kubenswrapper[3962]: E0308 21:56:41.199021 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e-kube-api-access-l5xq4 podName:f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e nodeName:}" failed. No retries permitted until 2026-03-08 21:56:41.698996817 +0000 UTC m=+69.332269029 (durationBeforeRetry 500ms). 
Mar 08 21:56:41.780050 master-0 kubenswrapper[3962]: I0308 21:56:41.778953 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5xq4\" (UniqueName: \"kubernetes.io/projected/f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e-kube-api-access-l5xq4\") pod \"network-check-target-djlff\" (UID: \"f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e\") " pod="openshift-network-diagnostics/network-check-target-djlff"
Mar 08 21:56:41.780050 master-0 kubenswrapper[3962]: E0308 21:56:41.779182 3962 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 08 21:56:41.780050 master-0 kubenswrapper[3962]: E0308 21:56:41.779201 3962 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 08 21:56:41.780050 master-0 kubenswrapper[3962]: E0308 21:56:41.779213 3962 projected.go:194] Error preparing data for projected volume kube-api-access-l5xq4 for pod openshift-network-diagnostics/network-check-target-djlff: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 08 21:56:41.780050 master-0 kubenswrapper[3962]: E0308 21:56:41.779268 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e-kube-api-access-l5xq4 podName:f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e nodeName:}" failed. No retries permitted until 2026-03-08 21:56:42.779250192 +0000 UTC m=+70.412522394 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l5xq4" (UniqueName: "kubernetes.io/projected/f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e-kube-api-access-l5xq4") pod "network-check-target-djlff" (UID: "f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 08 21:56:42.186506 master-0 kubenswrapper[3962]: I0308 21:56:42.186436 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lqdbv"
Mar 08 21:56:42.187187 master-0 kubenswrapper[3962]: E0308 21:56:42.186694 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-lqdbv" podUID="44e67e41-045e-42ef-8f60-6ef15606d6a2" Mar 08 21:56:42.283556 master-0 kubenswrapper[3962]: I0308 21:56:42.283470 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs\") pod \"network-metrics-daemon-lqdbv\" (UID: \"44e67e41-045e-42ef-8f60-6ef15606d6a2\") " pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:56:42.283794 master-0 kubenswrapper[3962]: E0308 21:56:42.283688 3962 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 21:56:42.283876 master-0 kubenswrapper[3962]: E0308 21:56:42.283827 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs podName:44e67e41-045e-42ef-8f60-6ef15606d6a2 nodeName:}" failed. No retries permitted until 2026-03-08 21:56:58.283797408 +0000 UTC m=+85.917069610 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs") pod "network-metrics-daemon-lqdbv" (UID: "44e67e41-045e-42ef-8f60-6ef15606d6a2") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 21:56:42.514789 master-0 kubenswrapper[3962]: I0308 21:56:42.514648 3962 generic.go:334] "Generic (PLEG): container finished" podID="96a67acb-9cc6-4793-b99a-01479b239d76" containerID="332b44c02955cc191872da4d797a1cc566a290dcc3b5e3b8b9e49f2a86f283e8" exitCode=0 Mar 08 21:56:42.514965 master-0 kubenswrapper[3962]: I0308 21:56:42.514741 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74fmb" event={"ID":"96a67acb-9cc6-4793-b99a-01479b239d76","Type":"ContainerDied","Data":"332b44c02955cc191872da4d797a1cc566a290dcc3b5e3b8b9e49f2a86f283e8"} Mar 08 21:56:42.788432 master-0 kubenswrapper[3962]: I0308 21:56:42.788233 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5xq4\" (UniqueName: \"kubernetes.io/projected/f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e-kube-api-access-l5xq4\") pod \"network-check-target-djlff\" (UID: \"f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e\") " pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:56:42.788624 master-0 kubenswrapper[3962]: E0308 21:56:42.788509 3962 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 08 21:56:42.788624 master-0 kubenswrapper[3962]: E0308 21:56:42.788529 3962 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 08 21:56:42.788624 master-0 kubenswrapper[3962]: E0308 21:56:42.788542 3962 projected.go:194] Error preparing data for projected volume kube-api-access-l5xq4 for pod openshift-network-diagnostics/network-check-target-djlff: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 21:56:42.788624 master-0 kubenswrapper[3962]: E0308 21:56:42.788602 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e-kube-api-access-l5xq4 
podName:f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e nodeName:}" failed. No retries permitted until 2026-03-08 21:56:44.788583658 +0000 UTC m=+72.421855860 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l5xq4" (UniqueName: "kubernetes.io/projected/f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e-kube-api-access-l5xq4") pod "network-check-target-djlff" (UID: "f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 21:56:43.187754 master-0 kubenswrapper[3962]: I0308 21:56:43.187157 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:56:43.187754 master-0 kubenswrapper[3962]: E0308 21:56:43.187672 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-djlff" podUID="f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e" Mar 08 21:56:43.613505 master-0 kubenswrapper[3962]: I0308 21:56:43.612494 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-trhtl"] Mar 08 21:56:43.613505 master-0 kubenswrapper[3962]: I0308 21:56:43.613006 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-trhtl" Mar 08 21:56:43.616628 master-0 kubenswrapper[3962]: I0308 21:56:43.616564 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 08 21:56:43.616703 master-0 kubenswrapper[3962]: I0308 21:56:43.616637 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 08 21:56:43.616950 master-0 kubenswrapper[3962]: I0308 21:56:43.616914 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 08 21:56:43.617060 master-0 kubenswrapper[3962]: I0308 21:56:43.617044 3962 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 08 21:56:43.617184 master-0 kubenswrapper[3962]: I0308 21:56:43.617161 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 08 21:56:43.695661 master-0 kubenswrapper[3962]: I0308 21:56:43.695525 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dfe625a1-5ba4-491f-9ab3-5d91154961a0-webhook-cert\") pod \"network-node-identity-trhtl\" (UID: \"dfe625a1-5ba4-491f-9ab3-5d91154961a0\") " pod="openshift-network-node-identity/network-node-identity-trhtl" Mar 08 21:56:43.695661 master-0 kubenswrapper[3962]: I0308 21:56:43.695615 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dfe625a1-5ba4-491f-9ab3-5d91154961a0-env-overrides\") pod \"network-node-identity-trhtl\" (UID: \"dfe625a1-5ba4-491f-9ab3-5d91154961a0\") " 
pod="openshift-network-node-identity/network-node-identity-trhtl" Mar 08 21:56:43.695661 master-0 kubenswrapper[3962]: I0308 21:56:43.695670 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9c64\" (UniqueName: \"kubernetes.io/projected/dfe625a1-5ba4-491f-9ab3-5d91154961a0-kube-api-access-j9c64\") pod \"network-node-identity-trhtl\" (UID: \"dfe625a1-5ba4-491f-9ab3-5d91154961a0\") " pod="openshift-network-node-identity/network-node-identity-trhtl" Mar 08 21:56:43.695972 master-0 kubenswrapper[3962]: I0308 21:56:43.695689 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/dfe625a1-5ba4-491f-9ab3-5d91154961a0-ovnkube-identity-cm\") pod \"network-node-identity-trhtl\" (UID: \"dfe625a1-5ba4-491f-9ab3-5d91154961a0\") " pod="openshift-network-node-identity/network-node-identity-trhtl" Mar 08 21:56:43.802177 master-0 kubenswrapper[3962]: I0308 21:56:43.801694 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dfe625a1-5ba4-491f-9ab3-5d91154961a0-webhook-cert\") pod \"network-node-identity-trhtl\" (UID: \"dfe625a1-5ba4-491f-9ab3-5d91154961a0\") " pod="openshift-network-node-identity/network-node-identity-trhtl" Mar 08 21:56:43.802177 master-0 kubenswrapper[3962]: I0308 21:56:43.801758 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dfe625a1-5ba4-491f-9ab3-5d91154961a0-env-overrides\") pod \"network-node-identity-trhtl\" (UID: \"dfe625a1-5ba4-491f-9ab3-5d91154961a0\") " pod="openshift-network-node-identity/network-node-identity-trhtl" Mar 08 21:56:43.802177 master-0 kubenswrapper[3962]: I0308 21:56:43.801791 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9c64\" (UniqueName: \"kubernetes.io/projected/dfe625a1-5ba4-491f-9ab3-5d91154961a0-kube-api-access-j9c64\") pod \"network-node-identity-trhtl\" (UID: \"dfe625a1-5ba4-491f-9ab3-5d91154961a0\") " pod="openshift-network-node-identity/network-node-identity-trhtl" Mar 08 21:56:43.802177 master-0 kubenswrapper[3962]: I0308 21:56:43.801818 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/dfe625a1-5ba4-491f-9ab3-5d91154961a0-ovnkube-identity-cm\") pod \"network-node-identity-trhtl\" (UID: \"dfe625a1-5ba4-491f-9ab3-5d91154961a0\") " pod="openshift-network-node-identity/network-node-identity-trhtl" Mar 08 21:56:43.809101 master-0 kubenswrapper[3962]: I0308 21:56:43.803247 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/dfe625a1-5ba4-491f-9ab3-5d91154961a0-ovnkube-identity-cm\") pod \"network-node-identity-trhtl\" (UID: \"dfe625a1-5ba4-491f-9ab3-5d91154961a0\") " pod="openshift-network-node-identity/network-node-identity-trhtl" Mar 08 21:56:43.809101 master-0 kubenswrapper[3962]: E0308 21:56:43.803367 3962 secret.go:189] Couldn't get secret openshift-network-node-identity/network-node-identity-cert: secret "network-node-identity-cert" not found Mar 08 21:56:43.809101 master-0 kubenswrapper[3962]: E0308 21:56:43.803750 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dfe625a1-5ba4-491f-9ab3-5d91154961a0-webhook-cert 
podName:dfe625a1-5ba4-491f-9ab3-5d91154961a0 nodeName:}" failed. No retries permitted until 2026-03-08 21:56:44.303727392 +0000 UTC m=+71.936999594 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/dfe625a1-5ba4-491f-9ab3-5d91154961a0-webhook-cert") pod "network-node-identity-trhtl" (UID: "dfe625a1-5ba4-491f-9ab3-5d91154961a0") : secret "network-node-identity-cert" not found Mar 08 21:56:43.809101 master-0 kubenswrapper[3962]: I0308 21:56:43.804741 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dfe625a1-5ba4-491f-9ab3-5d91154961a0-env-overrides\") pod \"network-node-identity-trhtl\" (UID: \"dfe625a1-5ba4-491f-9ab3-5d91154961a0\") " pod="openshift-network-node-identity/network-node-identity-trhtl" Mar 08 21:56:43.845475 master-0 kubenswrapper[3962]: I0308 21:56:43.845413 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9c64\" (UniqueName: \"kubernetes.io/projected/dfe625a1-5ba4-491f-9ab3-5d91154961a0-kube-api-access-j9c64\") pod \"network-node-identity-trhtl\" (UID: \"dfe625a1-5ba4-491f-9ab3-5d91154961a0\") " pod="openshift-network-node-identity/network-node-identity-trhtl" Mar 08 21:56:44.188265 master-0 kubenswrapper[3962]: I0308 21:56:44.186560 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:56:44.188265 master-0 kubenswrapper[3962]: E0308 21:56:44.186689 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lqdbv" podUID="44e67e41-045e-42ef-8f60-6ef15606d6a2" Mar 08 21:56:44.306593 master-0 kubenswrapper[3962]: I0308 21:56:44.306522 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dfe625a1-5ba4-491f-9ab3-5d91154961a0-webhook-cert\") pod \"network-node-identity-trhtl\" (UID: \"dfe625a1-5ba4-491f-9ab3-5d91154961a0\") " pod="openshift-network-node-identity/network-node-identity-trhtl" Mar 08 21:56:44.311977 master-0 kubenswrapper[3962]: I0308 21:56:44.311921 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dfe625a1-5ba4-491f-9ab3-5d91154961a0-webhook-cert\") pod \"network-node-identity-trhtl\" (UID: \"dfe625a1-5ba4-491f-9ab3-5d91154961a0\") " pod="openshift-network-node-identity/network-node-identity-trhtl" Mar 08 21:56:44.525490 master-0 kubenswrapper[3962]: I0308 21:56:44.525290 3962 generic.go:334] "Generic (PLEG): container finished" podID="96a67acb-9cc6-4793-b99a-01479b239d76" containerID="719c0f1133120f686febe97b7386aa26236fdb7648305df23056b3e40ec22875" exitCode=0 Mar 08 21:56:44.525490 master-0 kubenswrapper[3962]: I0308 21:56:44.525355 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74fmb" event={"ID":"96a67acb-9cc6-4793-b99a-01479b239d76","Type":"ContainerDied","Data":"719c0f1133120f686febe97b7386aa26236fdb7648305df23056b3e40ec22875"} Mar 08 21:56:44.539343 master-0 kubenswrapper[3962]: I0308 21:56:44.539052 3962 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-trhtl" Mar 08 21:56:44.565539 master-0 kubenswrapper[3962]: W0308 21:56:44.565474 3962 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfe625a1_5ba4_491f_9ab3_5d91154961a0.slice/crio-600acdcc91505be515a6dc9bb9d4094d13c856320148b4b0d5cc2092598749f2 WatchSource:0}: Error finding container 600acdcc91505be515a6dc9bb9d4094d13c856320148b4b0d5cc2092598749f2: Status 404 returned error can't find the container with id 600acdcc91505be515a6dc9bb9d4094d13c856320148b4b0d5cc2092598749f2 Mar 08 21:56:44.813498 master-0 kubenswrapper[3962]: I0308 21:56:44.813358 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5xq4\" (UniqueName: \"kubernetes.io/projected/f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e-kube-api-access-l5xq4\") pod \"network-check-target-djlff\" (UID: \"f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e\") " pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:56:44.813736 master-0 kubenswrapper[3962]: E0308 21:56:44.813523 3962 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 08 21:56:44.813736 master-0 kubenswrapper[3962]: E0308 21:56:44.813542 3962 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 08 21:56:44.813736 master-0 kubenswrapper[3962]: E0308 21:56:44.813556 3962 projected.go:194] Error preparing data for projected volume kube-api-access-l5xq4 for pod openshift-network-diagnostics/network-check-target-djlff: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 21:56:44.813736 master-0 kubenswrapper[3962]: E0308 21:56:44.813609 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e-kube-api-access-l5xq4 podName:f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e nodeName:}" failed. No retries permitted until 2026-03-08 21:56:48.813595258 +0000 UTC m=+76.446867460 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l5xq4" (UniqueName: "kubernetes.io/projected/f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e-kube-api-access-l5xq4") pod "network-check-target-djlff" (UID: "f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 21:56:45.188255 master-0 kubenswrapper[3962]: I0308 21:56:45.187415 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:56:45.188255 master-0 kubenswrapper[3962]: E0308 21:56:45.187678 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-djlff" podUID="f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e" Mar 08 21:56:45.540242 master-0 kubenswrapper[3962]: I0308 21:56:45.540095 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-trhtl" event={"ID":"dfe625a1-5ba4-491f-9ab3-5d91154961a0","Type":"ContainerStarted","Data":"600acdcc91505be515a6dc9bb9d4094d13c856320148b4b0d5cc2092598749f2"} Mar 08 21:56:46.187192 master-0 kubenswrapper[3962]: I0308 21:56:46.187136 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:56:46.187517 master-0 kubenswrapper[3962]: E0308 21:56:46.187311 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lqdbv" podUID="44e67e41-045e-42ef-8f60-6ef15606d6a2" Mar 08 21:56:47.187447 master-0 kubenswrapper[3962]: I0308 21:56:47.187381 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:56:47.188149 master-0 kubenswrapper[3962]: E0308 21:56:47.187528 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-djlff" podUID="f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e" Mar 08 21:56:48.186990 master-0 kubenswrapper[3962]: I0308 21:56:48.186934 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:56:48.191967 master-0 kubenswrapper[3962]: E0308 21:56:48.191930 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lqdbv" podUID="44e67e41-045e-42ef-8f60-6ef15606d6a2" Mar 08 21:56:48.879283 master-0 kubenswrapper[3962]: I0308 21:56:48.878706 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5xq4\" (UniqueName: \"kubernetes.io/projected/f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e-kube-api-access-l5xq4\") pod \"network-check-target-djlff\" (UID: \"f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e\") " pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:56:48.879283 master-0 kubenswrapper[3962]: E0308 21:56:48.878923 3962 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 08 21:56:48.879283 master-0 kubenswrapper[3962]: E0308 21:56:48.878942 3962 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 08 21:56:48.879283 master-0 kubenswrapper[3962]: E0308 21:56:48.878955 3962 projected.go:194] Error preparing data for projected volume kube-api-access-l5xq4 for pod openshift-network-diagnostics/network-check-target-djlff: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 21:56:48.879283 master-0 kubenswrapper[3962]: E0308 21:56:48.879016 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e-kube-api-access-l5xq4 podName:f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e nodeName:}" failed. No retries permitted until 2026-03-08 21:56:56.87899053 +0000 UTC m=+84.512262732 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l5xq4" (UniqueName: "kubernetes.io/projected/f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e-kube-api-access-l5xq4") pod "network-check-target-djlff" (UID: "f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 21:56:49.188601 master-0 kubenswrapper[3962]: I0308 21:56:49.186506 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:56:49.188601 master-0 kubenswrapper[3962]: E0308 21:56:49.186704 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-djlff" podUID="f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e" Mar 08 21:56:50.187233 master-0 kubenswrapper[3962]: I0308 21:56:50.186449 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:56:50.187233 master-0 kubenswrapper[3962]: E0308 21:56:50.186660 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lqdbv" podUID="44e67e41-045e-42ef-8f60-6ef15606d6a2" Mar 08 21:56:51.187569 master-0 kubenswrapper[3962]: I0308 21:56:51.187472 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:56:51.188337 master-0 kubenswrapper[3962]: E0308 21:56:51.187642 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-djlff" podUID="f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e" Mar 08 21:56:52.187559 master-0 kubenswrapper[3962]: I0308 21:56:52.187464 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:56:52.187882 master-0 kubenswrapper[3962]: E0308 21:56:52.187824 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lqdbv" podUID="44e67e41-045e-42ef-8f60-6ef15606d6a2" Mar 08 21:56:53.187292 master-0 kubenswrapper[3962]: I0308 21:56:53.187249 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:56:53.188382 master-0 kubenswrapper[3962]: E0308 21:56:53.188308 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-djlff" podUID="f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e" Mar 08 21:56:54.187343 master-0 kubenswrapper[3962]: I0308 21:56:54.187256 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:56:54.187711 master-0 kubenswrapper[3962]: E0308 21:56:54.187456 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lqdbv" podUID="44e67e41-045e-42ef-8f60-6ef15606d6a2" Mar 08 21:56:55.187247 master-0 kubenswrapper[3962]: I0308 21:56:55.187185 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:56:55.188094 master-0 kubenswrapper[3962]: E0308 21:56:55.187359 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-djlff" podUID="f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e" Mar 08 21:56:56.187164 master-0 kubenswrapper[3962]: I0308 21:56:56.187064 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:56:56.187477 master-0 kubenswrapper[3962]: E0308 21:56:56.187288 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lqdbv" podUID="44e67e41-045e-42ef-8f60-6ef15606d6a2" Mar 08 21:56:56.949135 master-0 kubenswrapper[3962]: I0308 21:56:56.948927 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5xq4\" (UniqueName: \"kubernetes.io/projected/f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e-kube-api-access-l5xq4\") pod \"network-check-target-djlff\" (UID: \"f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e\") " pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:56:56.956560 master-0 kubenswrapper[3962]: E0308 21:56:56.949211 3962 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 08 21:56:56.956560 master-0 kubenswrapper[3962]: E0308 21:56:56.949268 3962 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 08 21:56:56.956560 master-0 kubenswrapper[3962]: E0308 21:56:56.949289 3962 projected.go:194] Error preparing data for projected volume kube-api-access-l5xq4 for pod openshift-network-diagnostics/network-check-target-djlff: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 21:56:56.956560 master-0 kubenswrapper[3962]: E0308 21:56:56.949565 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e-kube-api-access-l5xq4 podName:f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e nodeName:}" failed. No retries permitted until 2026-03-08 21:57:12.949357252 +0000 UTC m=+100.582629494 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l5xq4" (UniqueName: "kubernetes.io/projected/f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e-kube-api-access-l5xq4") pod "network-check-target-djlff" (UID: "f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 21:56:57.187834 master-0 kubenswrapper[3962]: I0308 21:56:57.187744 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:56:57.189354 master-0 kubenswrapper[3962]: E0308 21:56:57.187957 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-djlff" podUID="f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e" Mar 08 21:56:57.576154 master-0 kubenswrapper[3962]: I0308 21:56:57.576031 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" event={"ID":"3624c541-56bf-4e7e-9460-6069eca194b2","Type":"ContainerStarted","Data":"124ddcbe8852d0f354bd50baf45937bf5acfc703f1acd39074e03c2c5bcba2e8"} Mar 08 21:56:57.578964 master-0 kubenswrapper[3962]: I0308 21:56:57.578739 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" event={"ID":"081acedd-4c88-461f-80f3-e80fdbadb725","Type":"ContainerStarted","Data":"aaa76f728d77c2984e519842ceb28a5273072cbb92bc05bafd70d63dc2b5a869"} Mar 08 21:56:57.588757 master-0 kubenswrapper[3962]: I0308 21:56:57.588688 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74fmb" event={"ID":"96a67acb-9cc6-4793-b99a-01479b239d76","Type":"ContainerStarted","Data":"ba570d5274abc3eff808a6feca603573aedab7307cfb102965df1c84daee657a"} Mar 08 21:56:57.607765 master-0 kubenswrapper[3962]: I0308 21:56:57.603026 3962 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" podStartSLOduration=1.881656094 podStartE2EDuration="20.603001879s" podCreationTimestamp="2026-03-08 21:56:37 +0000 UTC" firstStartedPulling="2026-03-08 21:56:38.69062662 +0000 UTC m=+66.323898822" lastFinishedPulling="2026-03-08 21:56:57.411972395 +0000 UTC m=+85.045244607" observedRunningTime="2026-03-08 21:56:57.60224604 +0000 UTC m=+85.235518302" watchObservedRunningTime="2026-03-08 21:56:57.603001879 +0000 UTC m=+85.236274071" Mar 08 21:56:58.187561 master-0 kubenswrapper[3962]: I0308 21:56:58.187450 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:56:58.187937 master-0 kubenswrapper[3962]: E0308 21:56:58.187745 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lqdbv" podUID="44e67e41-045e-42ef-8f60-6ef15606d6a2" Mar 08 21:56:58.368693 master-0 kubenswrapper[3962]: I0308 21:56:58.368603 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs\") pod \"network-metrics-daemon-lqdbv\" (UID: \"44e67e41-045e-42ef-8f60-6ef15606d6a2\") " pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:56:58.369007 master-0 kubenswrapper[3962]: E0308 21:56:58.368880 3962 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 21:56:58.369111 master-0 kubenswrapper[3962]: E0308 21:56:58.369045 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs podName:44e67e41-045e-42ef-8f60-6ef15606d6a2 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:30.369004348 +0000 UTC m=+118.002276590 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs") pod "network-metrics-daemon-lqdbv" (UID: "44e67e41-045e-42ef-8f60-6ef15606d6a2") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 21:56:58.594844 master-0 kubenswrapper[3962]: I0308 21:56:58.594743 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-trhtl" event={"ID":"dfe625a1-5ba4-491f-9ab3-5d91154961a0","Type":"ContainerStarted","Data":"73a8f9d32fb6d4973561166a1225ead4683b3110d97d82f0bed60b3b5a68361b"} Mar 08 21:56:58.594844 master-0 kubenswrapper[3962]: I0308 21:56:58.594829 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-trhtl" event={"ID":"dfe625a1-5ba4-491f-9ab3-5d91154961a0","Type":"ContainerStarted","Data":"d0965a7df17209c3214572f918df6f641eebcced99935a1fa23fd422d4732080"} Mar 08 21:56:58.599231 master-0 kubenswrapper[3962]: I0308 21:56:58.599146 3962 generic.go:334] "Generic (PLEG): container finished" podID="96a67acb-9cc6-4793-b99a-01479b239d76" containerID="ba570d5274abc3eff808a6feca603573aedab7307cfb102965df1c84daee657a" exitCode=0 Mar 08 21:56:58.599429 master-0 kubenswrapper[3962]: I0308 21:56:58.599281 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74fmb" event={"ID":"96a67acb-9cc6-4793-b99a-01479b239d76","Type":"ContainerDied","Data":"ba570d5274abc3eff808a6feca603573aedab7307cfb102965df1c84daee657a"} Mar 08 21:56:58.601673 master-0 kubenswrapper[3962]: I0308 21:56:58.601608 3962 generic.go:334] "Generic (PLEG): container finished" podID="3624c541-56bf-4e7e-9460-6069eca194b2" containerID="124ddcbe8852d0f354bd50baf45937bf5acfc703f1acd39074e03c2c5bcba2e8" exitCode=0 Mar 08 21:56:58.601673 master-0 kubenswrapper[3962]: I0308 21:56:58.601653 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" event={"ID":"3624c541-56bf-4e7e-9460-6069eca194b2","Type":"ContainerDied","Data":"124ddcbe8852d0f354bd50baf45937bf5acfc703f1acd39074e03c2c5bcba2e8"} Mar 08 21:56:58.625978 master-0 kubenswrapper[3962]: I0308 21:56:58.625889 3962 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-trhtl" podStartSLOduration=2.762925692 podStartE2EDuration="15.625856434s" podCreationTimestamp="2026-03-08 21:56:43 +0000 UTC" firstStartedPulling="2026-03-08 21:56:44.570462099 +0000 UTC m=+72.203734311" lastFinishedPulling="2026-03-08 21:56:57.433392821 +0000 UTC m=+85.066665053" observedRunningTime="2026-03-08 21:56:58.62568947 +0000 UTC m=+86.258961692" watchObservedRunningTime="2026-03-08 21:56:58.625856434 +0000 UTC m=+86.259128666" Mar 08 21:56:59.190559 master-0 kubenswrapper[3962]: I0308 21:56:59.187892 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:56:59.190559 master-0 kubenswrapper[3962]: E0308 21:56:59.188609 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-djlff" podUID="f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e" Mar 08 21:56:59.612301 master-0 kubenswrapper[3962]: I0308 21:56:59.612166 3962 generic.go:334] "Generic (PLEG): container finished" podID="96a67acb-9cc6-4793-b99a-01479b239d76" containerID="db4187056969875e15e546fde8b086c9df68d0dfd1ba3b2a7d33cdf8f2598f9a" exitCode=0 Mar 08 21:56:59.612687 master-0 kubenswrapper[3962]: I0308 21:56:59.612345 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74fmb" event={"ID":"96a67acb-9cc6-4793-b99a-01479b239d76","Type":"ContainerDied","Data":"db4187056969875e15e546fde8b086c9df68d0dfd1ba3b2a7d33cdf8f2598f9a"} Mar 08 21:56:59.622979 master-0 kubenswrapper[3962]: I0308 21:56:59.622848 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" event={"ID":"3624c541-56bf-4e7e-9460-6069eca194b2","Type":"ContainerStarted","Data":"88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8"} Mar 08 21:56:59.622979 master-0 kubenswrapper[3962]: I0308 21:56:59.622967 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" event={"ID":"3624c541-56bf-4e7e-9460-6069eca194b2","Type":"ContainerStarted","Data":"afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7"} Mar 08 21:56:59.623252 master-0 kubenswrapper[3962]: I0308 21:56:59.622991 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" event={"ID":"3624c541-56bf-4e7e-9460-6069eca194b2","Type":"ContainerStarted","Data":"815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc"} Mar 08 21:56:59.623252 master-0 kubenswrapper[3962]: I0308 21:56:59.623016 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" event={"ID":"3624c541-56bf-4e7e-9460-6069eca194b2","Type":"ContainerStarted","Data":"1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a"} Mar 08 21:56:59.623252 master-0 kubenswrapper[3962]: I0308 21:56:59.623034 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" event={"ID":"3624c541-56bf-4e7e-9460-6069eca194b2","Type":"ContainerStarted","Data":"debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5"} Mar 08 21:56:59.623252 master-0 kubenswrapper[3962]: I0308 21:56:59.623051 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" event={"ID":"3624c541-56bf-4e7e-9460-6069eca194b2","Type":"ContainerStarted","Data":"08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8"} Mar 08 21:57:00.187063 master-0 kubenswrapper[3962]: I0308 21:57:00.186792 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:57:00.188219 master-0 kubenswrapper[3962]: E0308 21:57:00.188061 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lqdbv" podUID="44e67e41-045e-42ef-8f60-6ef15606d6a2" Mar 08 21:57:00.201031 master-0 kubenswrapper[3962]: I0308 21:57:00.200958 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 08 21:57:00.635186 master-0 kubenswrapper[3962]: I0308 21:57:00.635105 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74fmb" event={"ID":"96a67acb-9cc6-4793-b99a-01479b239d76","Type":"ContainerStarted","Data":"64584e728966a4dc7f37960670b69b7def067398cf4f7ec06561a12640ec5ee2"} Mar 08 21:57:00.670579 master-0 kubenswrapper[3962]: I0308 21:57:00.670369 3962 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-74fmb" podStartSLOduration=4.285414201 podStartE2EDuration="35.67032397s" podCreationTimestamp="2026-03-08 21:56:25 +0000 UTC" firstStartedPulling="2026-03-08 21:56:25.962151339 +0000 UTC m=+53.595423581" lastFinishedPulling="2026-03-08 21:56:57.347061138 +0000 UTC m=+84.980333350" observedRunningTime="2026-03-08 21:57:00.667250861 +0000 UTC m=+88.300523113" watchObservedRunningTime="2026-03-08 21:57:00.67032397 +0000 UTC m=+88.303596212" Mar 08 21:57:00.685864 master-0 kubenswrapper[3962]: I0308 21:57:00.685752 3962 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-scheduler-master-0" podStartSLOduration=0.68571471 podStartE2EDuration="685.71471ms" podCreationTimestamp="2026-03-08 21:57:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 21:57:00.684798486 +0000 UTC m=+88.318070738" watchObservedRunningTime="2026-03-08 21:57:00.68571471 +0000 UTC m=+88.318986992" Mar 08 21:57:01.188509 master-0 kubenswrapper[3962]: I0308 21:57:01.188399 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:57:01.188918 master-0 kubenswrapper[3962]: E0308 21:57:01.188716 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-djlff" podUID="f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e" Mar 08 21:57:01.646656 master-0 kubenswrapper[3962]: I0308 21:57:01.646572 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" event={"ID":"3624c541-56bf-4e7e-9460-6069eca194b2","Type":"ContainerStarted","Data":"b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec"} Mar 08 21:57:02.187484 master-0 kubenswrapper[3962]: I0308 21:57:02.187372 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:57:02.187808 master-0 kubenswrapper[3962]: E0308 21:57:02.187580 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lqdbv" podUID="44e67e41-045e-42ef-8f60-6ef15606d6a2" Mar 08 21:57:02.205595 master-0 kubenswrapper[3962]: I0308 21:57:02.205506 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 08 21:57:03.186860 master-0 kubenswrapper[3962]: I0308 21:57:03.186788 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:57:03.189178 master-0 kubenswrapper[3962]: E0308 21:57:03.189112 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-djlff" podUID="f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e" Mar 08 21:57:03.265484 master-0 kubenswrapper[3962]: W0308 21:57:03.265385 3962 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Mar 08 21:57:03.275997 master-0 kubenswrapper[3962]: I0308 21:57:03.275876 3962 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podStartSLOduration=1.275843709 podStartE2EDuration="1.275843709s" podCreationTimestamp="2026-03-08 21:57:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 21:57:03.266637659 +0000 UTC m=+90.899909901" watchObservedRunningTime="2026-03-08 21:57:03.275843709 +0000 UTC m=+90.909115951" Mar 08 21:57:03.277448 master-0 kubenswrapper[3962]: I0308 21:57:03.277348 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 08 21:57:04.186698 master-0 kubenswrapper[3962]: I0308 21:57:04.186488 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:57:04.187005 master-0 kubenswrapper[3962]: E0308 21:57:04.186719 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lqdbv" podUID="44e67e41-045e-42ef-8f60-6ef15606d6a2" Mar 08 21:57:04.678504 master-0 kubenswrapper[3962]: I0308 21:57:04.678114 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" event={"ID":"3624c541-56bf-4e7e-9460-6069eca194b2","Type":"ContainerStarted","Data":"489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60"} Mar 08 21:57:04.678822 master-0 kubenswrapper[3962]: I0308 21:57:04.678610 3962 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:57:04.678982 master-0 kubenswrapper[3962]: I0308 21:57:04.678916 3962 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:57:04.679114 master-0 kubenswrapper[3962]: I0308 21:57:04.679023 3962 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:57:04.708188 master-0 kubenswrapper[3962]: I0308 21:57:04.708109 3962 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:57:04.715964 master-0 kubenswrapper[3962]: I0308 21:57:04.715878 3962 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:57:04.834679 master-0 kubenswrapper[3962]: I0308 21:57:04.834550 3962 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5m5qr"] Mar 08 21:57:05.082000 master-0 kubenswrapper[3962]: I0308 21:57:05.081907 3962 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0-master-0" podStartSLOduration=2.081888269 podStartE2EDuration="2.081888269s" podCreationTimestamp="2026-03-08 21:57:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 21:57:05.08154964 +0000 UTC m=+92.714821882" watchObservedRunningTime="2026-03-08 21:57:05.081888269 +0000 UTC m=+92.715160461" Mar 08 21:57:05.187599 master-0 kubenswrapper[3962]: I0308 21:57:05.187493 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:57:05.189031 master-0 kubenswrapper[3962]: E0308 21:57:05.187651 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-djlff" podUID="f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e" Mar 08 21:57:05.354182 master-0 kubenswrapper[3962]: I0308 21:57:05.353954 3962 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" podStartSLOduration=8.371126037 podStartE2EDuration="27.353919129s" podCreationTimestamp="2026-03-08 21:56:38 +0000 UTC" firstStartedPulling="2026-03-08 21:56:38.471945464 +0000 UTC m=+66.105217706" lastFinishedPulling="2026-03-08 21:56:57.454738596 +0000 UTC m=+85.088010798" observedRunningTime="2026-03-08 21:57:05.353473587 +0000 UTC m=+92.986745849" watchObservedRunningTime="2026-03-08 21:57:05.353919129 +0000 UTC m=+92.987191361" Mar 08 21:57:06.186655 master-0 kubenswrapper[3962]: I0308 21:57:06.186556 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:57:06.186916 master-0 kubenswrapper[3962]: E0308 21:57:06.186800 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lqdbv" podUID="44e67e41-045e-42ef-8f60-6ef15606d6a2" Mar 08 21:57:06.687705 master-0 kubenswrapper[3962]: I0308 21:57:06.687606 3962 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="ovn-controller" containerID="cri-o://08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8" gracePeriod=30 Mar 08 21:57:06.688405 master-0 kubenswrapper[3962]: I0308 21:57:06.687658 3962 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="nbdb" containerID="cri-o://88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8" gracePeriod=30 Mar 08 21:57:06.688405 master-0 kubenswrapper[3962]: I0308 21:57:06.687781 3962 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc" gracePeriod=30 Mar 08 21:57:06.688405 master-0 kubenswrapper[3962]: I0308 21:57:06.687851 3962 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="ovn-acl-logging" containerID="cri-o://debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5" gracePeriod=30 Mar 08 21:57:06.688405 master-0 kubenswrapper[3962]: I0308 21:57:06.687869 3962 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="northd" containerID="cri-o://afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7" gracePeriod=30 Mar 08 21:57:06.688405 master-0 kubenswrapper[3962]: I0308 21:57:06.687853 3962 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" 
podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="kube-rbac-proxy-node" containerID="cri-o://1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a" gracePeriod=30 Mar 08 21:57:06.688405 master-0 kubenswrapper[3962]: I0308 21:57:06.687980 3962 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="sbdb" containerID="cri-o://b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec" gracePeriod=30 Mar 08 21:57:06.720482 master-0 kubenswrapper[3962]: I0308 21:57:06.720406 3962 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="ovnkube-controller" containerID="cri-o://489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60" gracePeriod=30 Mar 08 21:57:06.981852 master-0 kubenswrapper[3962]: I0308 21:57:06.981779 3962 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5m5qr_3624c541-56bf-4e7e-9460-6069eca194b2/ovnkube-controller/0.log" Mar 08 21:57:06.984664 master-0 kubenswrapper[3962]: I0308 21:57:06.984604 3962 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5m5qr_3624c541-56bf-4e7e-9460-6069eca194b2/kube-rbac-proxy-ovn-metrics/0.log" Mar 08 21:57:06.985353 master-0 kubenswrapper[3962]: I0308 21:57:06.985307 3962 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5m5qr_3624c541-56bf-4e7e-9460-6069eca194b2/kube-rbac-proxy-node/0.log" Mar 08 21:57:06.986060 master-0 kubenswrapper[3962]: I0308 21:57:06.985996 3962 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5m5qr_3624c541-56bf-4e7e-9460-6069eca194b2/ovn-acl-logging/0.log" Mar 08 21:57:06.987153 master-0 kubenswrapper[3962]: I0308 21:57:06.987048 3962 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5m5qr_3624c541-56bf-4e7e-9460-6069eca194b2/ovn-controller/0.log" Mar 08 21:57:06.988018 master-0 kubenswrapper[3962]: I0308 21:57:06.987765 3962 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:57:07.052893 master-0 kubenswrapper[3962]: I0308 21:57:07.052778 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-g4d2r"] Mar 08 21:57:07.053272 master-0 kubenswrapper[3962]: E0308 21:57:07.053017 3962 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="northd" Mar 08 21:57:07.053272 master-0 kubenswrapper[3962]: I0308 21:57:07.053041 3962 state_mem.go:107] "Deleted CPUSet assignment" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="northd" Mar 08 21:57:07.053272 master-0 kubenswrapper[3962]: E0308 21:57:07.053053 3962 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="sbdb" Mar 08 21:57:07.053272 master-0 kubenswrapper[3962]: I0308 21:57:07.053062 3962 state_mem.go:107] "Deleted CPUSet assignment" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="sbdb" Mar 08 21:57:07.053272 master-0 kubenswrapper[3962]: E0308 21:57:07.053112 3962 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="ovn-controller" Mar 08 21:57:07.053272 master-0 kubenswrapper[3962]: I0308 21:57:07.053121 3962 state_mem.go:107] "Deleted CPUSet assignment" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="ovn-controller" Mar 08 21:57:07.053272 master-0 kubenswrapper[3962]: E0308 21:57:07.053131 3962 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="kubecfg-setup" Mar 08 21:57:07.053272 master-0 kubenswrapper[3962]: I0308 21:57:07.053139 3962 state_mem.go:107] "Deleted CPUSet assignment" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="kubecfg-setup" Mar 08 21:57:07.053272 master-0 kubenswrapper[3962]: E0308 21:57:07.053147 3962 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="kube-rbac-proxy-node" Mar 08 21:57:07.053272 master-0 kubenswrapper[3962]: I0308 21:57:07.053156 3962 state_mem.go:107] "Deleted CPUSet assignment" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="kube-rbac-proxy-node" Mar 08 21:57:07.053272 master-0 kubenswrapper[3962]: E0308 21:57:07.053195 3962 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="ovn-acl-logging" Mar 08 21:57:07.053272 master-0 kubenswrapper[3962]: I0308 21:57:07.053204 3962 state_mem.go:107] "Deleted CPUSet assignment" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="ovn-acl-logging" Mar 08 21:57:07.053272 master-0 kubenswrapper[3962]: E0308 21:57:07.053213 3962 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="kube-rbac-proxy-ovn-metrics" Mar 08 21:57:07.053272 master-0 kubenswrapper[3962]: I0308 21:57:07.053222 3962 state_mem.go:107] "Deleted CPUSet assignment" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="kube-rbac-proxy-ovn-metrics" Mar 08 21:57:07.053272 master-0 kubenswrapper[3962]: E0308 21:57:07.053230 3962 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="nbdb" Mar 08 21:57:07.053272 master-0 kubenswrapper[3962]: I0308 21:57:07.053237 3962 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="nbdb" Mar 08 21:57:07.053272 master-0 kubenswrapper[3962]: E0308 21:57:07.053274 3962 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="ovnkube-controller" Mar 08 21:57:07.053272 master-0 kubenswrapper[3962]: I0308 21:57:07.053284 3962 state_mem.go:107] "Deleted CPUSet assignment" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="ovnkube-controller" Mar 08 21:57:07.054844 master-0 kubenswrapper[3962]: I0308 21:57:07.053361 3962 memory_manager.go:354] "RemoveStaleState removing state" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="ovn-controller" Mar 08 21:57:07.054844 master-0 kubenswrapper[3962]: I0308 21:57:07.053374 3962 memory_manager.go:354] "RemoveStaleState removing state" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="kube-rbac-proxy-node" Mar 08 21:57:07.054844 master-0 kubenswrapper[3962]: I0308 21:57:07.053384 3962 memory_manager.go:354] "RemoveStaleState removing state" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="nbdb" Mar 08 21:57:07.054844 master-0 kubenswrapper[3962]: I0308 21:57:07.053393 3962 memory_manager.go:354] "RemoveStaleState removing state" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="ovn-acl-logging" Mar 08 21:57:07.054844 master-0 kubenswrapper[3962]: I0308 21:57:07.053401 3962 memory_manager.go:354] "RemoveStaleState removing state" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="northd" Mar 08 21:57:07.054844 master-0 kubenswrapper[3962]: I0308 21:57:07.053437 3962 memory_manager.go:354] "RemoveStaleState removing state" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="kube-rbac-proxy-ovn-metrics" Mar 08 21:57:07.054844 master-0 kubenswrapper[3962]: I0308 21:57:07.053448 3962 memory_manager.go:354] "RemoveStaleState removing state" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="sbdb" Mar 08 21:57:07.054844 master-0 kubenswrapper[3962]: I0308 21:57:07.053456 3962 memory_manager.go:354] "RemoveStaleState removing state" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" containerName="ovnkube-controller" Mar 08 21:57:07.054844 master-0 kubenswrapper[3962]: I0308 21:57:07.054772 3962 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.072260 master-0 kubenswrapper[3962]: I0308 21:57:07.072139 3962 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3624c541-56bf-4e7e-9460-6069eca194b2-ovnkube-config\") pod \"3624c541-56bf-4e7e-9460-6069eca194b2\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " Mar 08 21:57:07.072260 master-0 kubenswrapper[3962]: I0308 21:57:07.072240 3962 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-run-ovn\") pod \"3624c541-56bf-4e7e-9460-6069eca194b2\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " Mar 08 21:57:07.072699 master-0 kubenswrapper[3962]: I0308 21:57:07.072303 3962 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-slash\") pod \"3624c541-56bf-4e7e-9460-6069eca194b2\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " Mar 08 21:57:07.072699 master-0 kubenswrapper[3962]: I0308 21:57:07.072386 3962 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-kubelet\") pod \"3624c541-56bf-4e7e-9460-6069eca194b2\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " Mar 08 21:57:07.072699 master-0 kubenswrapper[3962]: I0308 21:57:07.072428 3962 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-run-openvswitch\") pod \"3624c541-56bf-4e7e-9460-6069eca194b2\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " Mar 08 21:57:07.072699 master-0 kubenswrapper[3962]: I0308 21:57:07.072480 3962 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n96f6\" (UniqueName: \"kubernetes.io/projected/3624c541-56bf-4e7e-9460-6069eca194b2-kube-api-access-n96f6\") pod \"3624c541-56bf-4e7e-9460-6069eca194b2\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " Mar 08 21:57:07.072699 master-0 kubenswrapper[3962]: I0308 21:57:07.072538 3962 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3624c541-56bf-4e7e-9460-6069eca194b2-ovn-node-metrics-cert\") pod \"3624c541-56bf-4e7e-9460-6069eca194b2\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " Mar 08 21:57:07.072699 master-0 kubenswrapper[3962]: I0308 21:57:07.072592 3962 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-cni-bin\") pod \"3624c541-56bf-4e7e-9460-6069eca194b2\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " Mar 08 21:57:07.072699 master-0 kubenswrapper[3962]: I0308 21:57:07.072639 3962 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-etc-openvswitch\") pod \"3624c541-56bf-4e7e-9460-6069eca194b2\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " Mar 08 21:57:07.073340 master-0 kubenswrapper[3962]: I0308 21:57:07.072741 3962 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" 
(UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-systemd-units\") pod \"3624c541-56bf-4e7e-9460-6069eca194b2\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " Mar 08 21:57:07.073340 master-0 kubenswrapper[3962]: I0308 21:57:07.072798 3962 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"3624c541-56bf-4e7e-9460-6069eca194b2\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " Mar 08 21:57:07.073340 master-0 kubenswrapper[3962]: I0308 21:57:07.072838 3962 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-cni-netd\") pod \"3624c541-56bf-4e7e-9460-6069eca194b2\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " Mar 08 21:57:07.073340 master-0 kubenswrapper[3962]: I0308 21:57:07.072877 3962 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3624c541-56bf-4e7e-9460-6069eca194b2-env-overrides\") pod \"3624c541-56bf-4e7e-9460-6069eca194b2\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " Mar 08 21:57:07.073340 master-0 kubenswrapper[3962]: I0308 21:57:07.072911 3962 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-log-socket\") pod \"3624c541-56bf-4e7e-9460-6069eca194b2\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " Mar 08 21:57:07.073340 master-0 kubenswrapper[3962]: I0308 21:57:07.072949 3962 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-node-log\") pod \"3624c541-56bf-4e7e-9460-6069eca194b2\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " Mar 08 21:57:07.073340 master-0 kubenswrapper[3962]: I0308 21:57:07.073000 3962 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3624c541-56bf-4e7e-9460-6069eca194b2-ovnkube-script-lib\") pod \"3624c541-56bf-4e7e-9460-6069eca194b2\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " Mar 08 21:57:07.073340 master-0 kubenswrapper[3962]: I0308 21:57:07.073035 3962 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-run-systemd\") pod \"3624c541-56bf-4e7e-9460-6069eca194b2\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " Mar 08 21:57:07.073340 master-0 kubenswrapper[3962]: I0308 21:57:07.073068 3962 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-run-ovn-kubernetes\") pod \"3624c541-56bf-4e7e-9460-6069eca194b2\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " Mar 08 21:57:07.073340 master-0 kubenswrapper[3962]: I0308 21:57:07.073132 3962 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-run-netns\") pod \"3624c541-56bf-4e7e-9460-6069eca194b2\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " Mar 08 21:57:07.073340 
master-0 kubenswrapper[3962]: I0308 21:57:07.073163 3962 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-var-lib-openvswitch\") pod \"3624c541-56bf-4e7e-9460-6069eca194b2\" (UID: \"3624c541-56bf-4e7e-9460-6069eca194b2\") " Mar 08 21:57:07.074565 master-0 kubenswrapper[3962]: I0308 21:57:07.072455 3962 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "3624c541-56bf-4e7e-9460-6069eca194b2" (UID: "3624c541-56bf-4e7e-9460-6069eca194b2"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 21:57:07.074565 master-0 kubenswrapper[3962]: I0308 21:57:07.072566 3962 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "3624c541-56bf-4e7e-9460-6069eca194b2" (UID: "3624c541-56bf-4e7e-9460-6069eca194b2"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 21:57:07.074565 master-0 kubenswrapper[3962]: I0308 21:57:07.072567 3962 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-slash" (OuterVolumeSpecName: "host-slash") pod "3624c541-56bf-4e7e-9460-6069eca194b2" (UID: "3624c541-56bf-4e7e-9460-6069eca194b2"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 21:57:07.074565 master-0 kubenswrapper[3962]: I0308 21:57:07.072586 3962 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "3624c541-56bf-4e7e-9460-6069eca194b2" (UID: "3624c541-56bf-4e7e-9460-6069eca194b2"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 21:57:07.074565 master-0 kubenswrapper[3962]: I0308 21:57:07.072668 3962 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "3624c541-56bf-4e7e-9460-6069eca194b2" (UID: "3624c541-56bf-4e7e-9460-6069eca194b2"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 21:57:07.074565 master-0 kubenswrapper[3962]: I0308 21:57:07.072721 3962 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "3624c541-56bf-4e7e-9460-6069eca194b2" (UID: "3624c541-56bf-4e7e-9460-6069eca194b2"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 21:57:07.074565 master-0 kubenswrapper[3962]: I0308 21:57:07.072794 3962 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "3624c541-56bf-4e7e-9460-6069eca194b2" (UID: "3624c541-56bf-4e7e-9460-6069eca194b2"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 21:57:07.074565 master-0 kubenswrapper[3962]: I0308 21:57:07.073841 3962 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3624c541-56bf-4e7e-9460-6069eca194b2-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "3624c541-56bf-4e7e-9460-6069eca194b2" (UID: "3624c541-56bf-4e7e-9460-6069eca194b2"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 21:57:07.074565 master-0 kubenswrapper[3962]: I0308 21:57:07.072832 3962 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "3624c541-56bf-4e7e-9460-6069eca194b2" (UID: "3624c541-56bf-4e7e-9460-6069eca194b2"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 21:57:07.074565 master-0 kubenswrapper[3962]: I0308 21:57:07.072874 3962 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3624c541-56bf-4e7e-9460-6069eca194b2-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "3624c541-56bf-4e7e-9460-6069eca194b2" (UID: "3624c541-56bf-4e7e-9460-6069eca194b2"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 21:57:07.074565 master-0 kubenswrapper[3962]: I0308 21:57:07.072922 3962 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "3624c541-56bf-4e7e-9460-6069eca194b2" (UID: "3624c541-56bf-4e7e-9460-6069eca194b2"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 21:57:07.074565 master-0 kubenswrapper[3962]: I0308 21:57:07.072995 3962 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-log-socket" (OuterVolumeSpecName: "log-socket") pod "3624c541-56bf-4e7e-9460-6069eca194b2" (UID: "3624c541-56bf-4e7e-9460-6069eca194b2"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 21:57:07.074565 master-0 kubenswrapper[3962]: I0308 21:57:07.073053 3962 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-node-log" (OuterVolumeSpecName: "node-log") pod "3624c541-56bf-4e7e-9460-6069eca194b2" (UID: "3624c541-56bf-4e7e-9460-6069eca194b2"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 21:57:07.074565 master-0 kubenswrapper[3962]: I0308 21:57:07.073405 3962 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "3624c541-56bf-4e7e-9460-6069eca194b2" (UID: "3624c541-56bf-4e7e-9460-6069eca194b2"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 21:57:07.074565 master-0 kubenswrapper[3962]: I0308 21:57:07.073442 3962 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "3624c541-56bf-4e7e-9460-6069eca194b2" (UID: "3624c541-56bf-4e7e-9460-6069eca194b2"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 21:57:07.075999 master-0 kubenswrapper[3962]: I0308 21:57:07.073442 3962 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3624c541-56bf-4e7e-9460-6069eca194b2-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "3624c541-56bf-4e7e-9460-6069eca194b2" (UID: "3624c541-56bf-4e7e-9460-6069eca194b2"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 21:57:07.075999 master-0 kubenswrapper[3962]: I0308 21:57:07.073472 3962 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "3624c541-56bf-4e7e-9460-6069eca194b2" (UID: "3624c541-56bf-4e7e-9460-6069eca194b2"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 21:57:07.079359 master-0 kubenswrapper[3962]: I0308 21:57:07.079269 3962 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3624c541-56bf-4e7e-9460-6069eca194b2-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "3624c541-56bf-4e7e-9460-6069eca194b2" (UID: "3624c541-56bf-4e7e-9460-6069eca194b2"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 21:57:07.080365 master-0 kubenswrapper[3962]: I0308 21:57:07.080303 3962 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3624c541-56bf-4e7e-9460-6069eca194b2-kube-api-access-n96f6" (OuterVolumeSpecName: "kube-api-access-n96f6") pod "3624c541-56bf-4e7e-9460-6069eca194b2" (UID: "3624c541-56bf-4e7e-9460-6069eca194b2"). InnerVolumeSpecName "kube-api-access-n96f6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 21:57:07.086717 master-0 kubenswrapper[3962]: I0308 21:57:07.086650 3962 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "3624c541-56bf-4e7e-9460-6069eca194b2" (UID: "3624c541-56bf-4e7e-9460-6069eca194b2"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 21:57:07.174287 master-0 kubenswrapper[3962]: I0308 21:57:07.174133 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-log-socket\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.174287 master-0 kubenswrapper[3962]: I0308 21:57:07.174243 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-cni-netd\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.174287 master-0 kubenswrapper[3962]: I0308 21:57:07.174279 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-cni-bin\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.174287 master-0 kubenswrapper[3962]: I0308 21:57:07.174309 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-var-lib-openvswitch\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.174287 master-0 kubenswrapper[3962]: I0308 21:57:07.174353 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcqnj\" (UniqueName: \"kubernetes.io/projected/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-kube-api-access-pcqnj\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.174858 master-0 kubenswrapper[3962]: I0308 21:57:07.174418 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-etc-openvswitch\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.174858 master-0 kubenswrapper[3962]: I0308 21:57:07.174592 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-run-ovn-kubernetes\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.174858 master-0 kubenswrapper[3962]: I0308 21:57:07.174739 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-slash\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.174858 master-0 kubenswrapper[3962]: I0308 21:57:07.174785 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-ovnkube-script-lib\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.174858 master-0 kubenswrapper[3962]: I0308 21:57:07.174859 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-kubelet\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.175194 master-0 kubenswrapper[3962]: I0308 21:57:07.174898 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-run-systemd\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.175194 master-0 kubenswrapper[3962]: I0308 21:57:07.174958 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-run-netns\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.175194 master-0 kubenswrapper[3962]: I0308 21:57:07.174994 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-ovnkube-config\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.175194 master-0 kubenswrapper[3962]: I0308 21:57:07.175027 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-node-log\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.175424 master-0 kubenswrapper[3962]: I0308 21:57:07.175200 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.175424 master-0 kubenswrapper[3962]: I0308 21:57:07.175262 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-env-overrides\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.175545 master-0 kubenswrapper[3962]: I0308 21:57:07.175464 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-run-openvswitch\") pod \"ovnkube-node-g4d2r\" (UID: 
\"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.175545 master-0 kubenswrapper[3962]: I0308 21:57:07.175511 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-run-ovn\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.175545 master-0 kubenswrapper[3962]: I0308 21:57:07.175542 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-ovn-node-metrics-cert\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.175707 master-0 kubenswrapper[3962]: I0308 21:57:07.175580 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-systemd-units\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.175707 master-0 kubenswrapper[3962]: I0308 21:57:07.175669 3962 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-etc-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:07.175707 master-0 kubenswrapper[3962]: I0308 21:57:07.175690 3962 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-systemd-units\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:07.175880 master-0 kubenswrapper[3962]: I0308 21:57:07.175711 3962 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:07.175880 master-0 kubenswrapper[3962]: I0308 21:57:07.175732 3962 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-cni-netd\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:07.175880 master-0 kubenswrapper[3962]: I0308 21:57:07.175750 3962 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3624c541-56bf-4e7e-9460-6069eca194b2-env-overrides\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:07.175880 master-0 kubenswrapper[3962]: I0308 21:57:07.175767 3962 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-log-socket\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:07.175880 master-0 kubenswrapper[3962]: I0308 21:57:07.175787 3962 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-node-log\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:07.175880 master-0 kubenswrapper[3962]: I0308 21:57:07.175807 3962 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/3624c541-56bf-4e7e-9460-6069eca194b2-ovnkube-script-lib\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:07.175880 master-0 kubenswrapper[3962]: I0308 21:57:07.175825 3962 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-run-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:07.175880 master-0 kubenswrapper[3962]: I0308 21:57:07.175843 3962 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-run-systemd\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:07.175880 master-0 kubenswrapper[3962]: I0308 21:57:07.175861 3962 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-run-netns\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:07.175880 master-0 kubenswrapper[3962]: I0308 21:57:07.175878 3962 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-var-lib-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:07.175880 master-0 kubenswrapper[3962]: I0308 21:57:07.175897 3962 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3624c541-56bf-4e7e-9460-6069eca194b2-ovnkube-config\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:07.176529 master-0 kubenswrapper[3962]: I0308 21:57:07.175915 3962 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-run-ovn\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:07.176529 master-0 kubenswrapper[3962]: I0308 21:57:07.175934 3962 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-kubelet\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:07.176529 master-0 kubenswrapper[3962]: I0308 21:57:07.175951 3962 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-slash\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:07.176529 master-0 kubenswrapper[3962]: I0308 21:57:07.175967 3962 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-run-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:07.176529 master-0 kubenswrapper[3962]: I0308 21:57:07.175985 3962 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n96f6\" (UniqueName: \"kubernetes.io/projected/3624c541-56bf-4e7e-9460-6069eca194b2-kube-api-access-n96f6\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:07.176529 master-0 kubenswrapper[3962]: I0308 21:57:07.176002 3962 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3624c541-56bf-4e7e-9460-6069eca194b2-ovn-node-metrics-cert\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:07.176529 master-0 kubenswrapper[3962]: I0308 21:57:07.176019 3962 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3624c541-56bf-4e7e-9460-6069eca194b2-host-cni-bin\") on node \"master-0\" 
DevicePath \"\"" Mar 08 21:57:07.187568 master-0 kubenswrapper[3962]: I0308 21:57:07.187501 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:57:07.187737 master-0 kubenswrapper[3962]: E0308 21:57:07.187700 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-djlff" podUID="f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e" Mar 08 21:57:07.277384 master-0 kubenswrapper[3962]: I0308 21:57:07.277279 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-log-socket\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.277384 master-0 kubenswrapper[3962]: I0308 21:57:07.277373 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-cni-netd\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.277791 master-0 kubenswrapper[3962]: I0308 21:57:07.277514 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-log-socket\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.277859 master-0 kubenswrapper[3962]: I0308 21:57:07.277799 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-cni-netd\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.277922 master-0 kubenswrapper[3962]: I0308 21:57:07.277821 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-cni-bin\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.277985 master-0 kubenswrapper[3962]: I0308 21:57:07.277942 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-var-lib-openvswitch\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.277985 master-0 kubenswrapper[3962]: I0308 21:57:07.277952 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-cni-bin\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.278192 master-0 kubenswrapper[3962]: I0308 21:57:07.277987 3962 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pcqnj\" (UniqueName: \"kubernetes.io/projected/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-kube-api-access-pcqnj\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.278192 master-0 kubenswrapper[3962]: I0308 21:57:07.278021 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-var-lib-openvswitch\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.278192 master-0 kubenswrapper[3962]: I0308 21:57:07.278040 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-etc-openvswitch\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.278192 master-0 kubenswrapper[3962]: I0308 21:57:07.278136 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-run-ovn-kubernetes\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.278427 master-0 kubenswrapper[3962]: I0308 21:57:07.278197 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-slash\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.278427 master-0 kubenswrapper[3962]: I0308 21:57:07.278222 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-etc-openvswitch\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.278427 master-0 kubenswrapper[3962]: I0308 21:57:07.278315 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-run-ovn-kubernetes\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.278427 master-0 kubenswrapper[3962]: I0308 21:57:07.278369 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-ovnkube-script-lib\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.278427 master-0 kubenswrapper[3962]: I0308 21:57:07.278426 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-kubelet\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.278709 master-0 kubenswrapper[3962]: I0308 
21:57:07.278473 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-run-systemd\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.278709 master-0 kubenswrapper[3962]: I0308 21:57:07.278508 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-run-netns\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.278709 master-0 kubenswrapper[3962]: I0308 21:57:07.278553 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-ovnkube-config\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.278709 master-0 kubenswrapper[3962]: I0308 21:57:07.278581 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-node-log\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.278709 master-0 kubenswrapper[3962]: I0308 21:57:07.278603 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.278709 master-0 kubenswrapper[3962]: I0308 21:57:07.278651 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-env-overrides\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.278709 master-0 kubenswrapper[3962]: I0308 21:57:07.278679 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-run-openvswitch\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.279127 master-0 kubenswrapper[3962]: I0308 21:57:07.278722 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-run-ovn\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.279127 master-0 kubenswrapper[3962]: I0308 21:57:07.278746 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-ovn-node-metrics-cert\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 
21:57:07.279127 master-0 kubenswrapper[3962]: I0308 21:57:07.278798 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-systemd-units\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.279127 master-0 kubenswrapper[3962]: I0308 21:57:07.278894 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-systemd-units\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.279127 master-0 kubenswrapper[3962]: I0308 21:57:07.278926 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-slash\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.279127 master-0 kubenswrapper[3962]: I0308 21:57:07.278972 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-kubelet\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.279127 master-0 kubenswrapper[3962]: I0308 21:57:07.279003 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-run-systemd\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.279127 master-0 kubenswrapper[3962]: I0308 21:57:07.279125 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-run-netns\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.279573 master-0 kubenswrapper[3962]: I0308 21:57:07.279294 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-node-log\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.279573 master-0 kubenswrapper[3962]: I0308 21:57:07.279361 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-run-ovn\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.279573 master-0 kubenswrapper[3962]: I0308 21:57:07.279414 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-run-openvswitch\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.279573 master-0 kubenswrapper[3962]: I0308 21:57:07.279461 3962 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.280146 master-0 kubenswrapper[3962]: I0308 21:57:07.280059 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-ovnkube-config\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.280242 master-0 kubenswrapper[3962]: I0308 21:57:07.280193 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-env-overrides\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.281465 master-0 kubenswrapper[3962]: I0308 21:57:07.281406 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-ovnkube-script-lib\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.285433 master-0 kubenswrapper[3962]: I0308 21:57:07.285369 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-ovn-node-metrics-cert\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.300787 master-0 kubenswrapper[3962]: I0308 21:57:07.300705 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcqnj\" (UniqueName: \"kubernetes.io/projected/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-kube-api-access-pcqnj\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.380286 master-0 kubenswrapper[3962]: I0308 21:57:07.380176 3962 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:07.400636 master-0 kubenswrapper[3962]: W0308 21:57:07.400546 3962 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1232f59f_4e6a_46ef_8bec_1bd4e04956ef.slice/crio-203faae43e1f3878693db94445a48431e9bbc8eef7fc425b238b62ca3a64799d WatchSource:0}: Error finding container 203faae43e1f3878693db94445a48431e9bbc8eef7fc425b238b62ca3a64799d: Status 404 returned error can't find the container with id 203faae43e1f3878693db94445a48431e9bbc8eef7fc425b238b62ca3a64799d Mar 08 21:57:07.693335 master-0 kubenswrapper[3962]: I0308 21:57:07.693284 3962 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5m5qr_3624c541-56bf-4e7e-9460-6069eca194b2/ovnkube-controller/0.log" Mar 08 21:57:07.695527 master-0 kubenswrapper[3962]: I0308 21:57:07.695501 3962 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5m5qr_3624c541-56bf-4e7e-9460-6069eca194b2/kube-rbac-proxy-ovn-metrics/0.log" Mar 08 21:57:07.696333 master-0 kubenswrapper[3962]: I0308 21:57:07.696219 3962 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5m5qr_3624c541-56bf-4e7e-9460-6069eca194b2/kube-rbac-proxy-node/0.log" Mar 08 21:57:07.696972 master-0 kubenswrapper[3962]: I0308 21:57:07.696865 3962 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5m5qr_3624c541-56bf-4e7e-9460-6069eca194b2/ovn-acl-logging/0.log" Mar 08 21:57:07.697722 master-0 kubenswrapper[3962]: I0308 21:57:07.697692 3962 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5m5qr_3624c541-56bf-4e7e-9460-6069eca194b2/ovn-controller/0.log" Mar 08 21:57:07.698576 master-0 kubenswrapper[3962]: I0308 21:57:07.698395 3962 generic.go:334] "Generic (PLEG): container finished" podID="3624c541-56bf-4e7e-9460-6069eca194b2" containerID="489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60" exitCode=2 Mar 08 21:57:07.698576 master-0 kubenswrapper[3962]: I0308 21:57:07.698426 3962 generic.go:334] "Generic (PLEG): container finished" podID="3624c541-56bf-4e7e-9460-6069eca194b2" containerID="b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec" exitCode=0 Mar 08 21:57:07.698576 master-0 kubenswrapper[3962]: I0308 21:57:07.698438 3962 generic.go:334] "Generic (PLEG): container finished" podID="3624c541-56bf-4e7e-9460-6069eca194b2" containerID="88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8" exitCode=0 Mar 08 21:57:07.698576 master-0 kubenswrapper[3962]: I0308 21:57:07.698447 3962 generic.go:334] "Generic (PLEG): container finished" podID="3624c541-56bf-4e7e-9460-6069eca194b2" containerID="afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7" exitCode=0 Mar 08 21:57:07.698576 master-0 kubenswrapper[3962]: I0308 21:57:07.698456 3962 generic.go:334] "Generic (PLEG): container finished" podID="3624c541-56bf-4e7e-9460-6069eca194b2" containerID="815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc" exitCode=143 Mar 08 21:57:07.698576 master-0 kubenswrapper[3962]: I0308 21:57:07.698466 3962 generic.go:334] "Generic (PLEG): container finished" podID="3624c541-56bf-4e7e-9460-6069eca194b2" containerID="1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a" exitCode=143 Mar 08 21:57:07.698576 master-0 kubenswrapper[3962]: I0308 21:57:07.698474 3962 
generic.go:334] "Generic (PLEG): container finished" podID="3624c541-56bf-4e7e-9460-6069eca194b2" containerID="debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5" exitCode=143 Mar 08 21:57:07.698576 master-0 kubenswrapper[3962]: I0308 21:57:07.698484 3962 generic.go:334] "Generic (PLEG): container finished" podID="3624c541-56bf-4e7e-9460-6069eca194b2" containerID="08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8" exitCode=143 Mar 08 21:57:07.698576 master-0 kubenswrapper[3962]: I0308 21:57:07.698544 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" event={"ID":"3624c541-56bf-4e7e-9460-6069eca194b2","Type":"ContainerDied","Data":"489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60"} Mar 08 21:57:07.698576 master-0 kubenswrapper[3962]: I0308 21:57:07.698576 3962 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" Mar 08 21:57:07.701628 master-0 kubenswrapper[3962]: I0308 21:57:07.698618 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" event={"ID":"3624c541-56bf-4e7e-9460-6069eca194b2","Type":"ContainerDied","Data":"b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec"} Mar 08 21:57:07.701628 master-0 kubenswrapper[3962]: I0308 21:57:07.698648 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" event={"ID":"3624c541-56bf-4e7e-9460-6069eca194b2","Type":"ContainerDied","Data":"88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8"} Mar 08 21:57:07.701628 master-0 kubenswrapper[3962]: I0308 21:57:07.698667 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" event={"ID":"3624c541-56bf-4e7e-9460-6069eca194b2","Type":"ContainerDied","Data":"afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7"} Mar 08 21:57:07.701628 master-0 kubenswrapper[3962]: I0308 21:57:07.698671 3962 scope.go:117] "RemoveContainer" containerID="489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60" Mar 08 21:57:07.701628 master-0 kubenswrapper[3962]: I0308 21:57:07.698682 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" event={"ID":"3624c541-56bf-4e7e-9460-6069eca194b2","Type":"ContainerDied","Data":"815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc"} Mar 08 21:57:07.701628 master-0 kubenswrapper[3962]: I0308 21:57:07.698701 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" event={"ID":"3624c541-56bf-4e7e-9460-6069eca194b2","Type":"ContainerDied","Data":"1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a"} Mar 08 21:57:07.701628 master-0 kubenswrapper[3962]: I0308 21:57:07.698717 3962 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5"} Mar 08 21:57:07.701628 master-0 kubenswrapper[3962]: I0308 21:57:07.698894 3962 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8"} Mar 08 21:57:07.701628 master-0 kubenswrapper[3962]: I0308 21:57:07.698903 3962 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"124ddcbe8852d0f354bd50baf45937bf5acfc703f1acd39074e03c2c5bcba2e8"} Mar 08 21:57:07.701628 master-0 kubenswrapper[3962]: I0308 21:57:07.698915 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" event={"ID":"3624c541-56bf-4e7e-9460-6069eca194b2","Type":"ContainerDied","Data":"debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5"} Mar 08 21:57:07.701628 master-0 kubenswrapper[3962]: I0308 21:57:07.698929 3962 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60"} Mar 08 21:57:07.701628 master-0 kubenswrapper[3962]: I0308 21:57:07.698940 3962 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec"} Mar 08 21:57:07.701628 master-0 kubenswrapper[3962]: I0308 21:57:07.698948 3962 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8"} Mar 08 21:57:07.701628 master-0 kubenswrapper[3962]: I0308 21:57:07.698956 3962 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7"} Mar 08 21:57:07.701628 master-0 kubenswrapper[3962]: I0308 21:57:07.698964 3962 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc"} Mar 08 21:57:07.701628 master-0 kubenswrapper[3962]: I0308 21:57:07.698971 3962 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a"} Mar 08 21:57:07.701628 master-0 kubenswrapper[3962]: I0308 21:57:07.698981 3962 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5"} Mar 08 21:57:07.701628 master-0 kubenswrapper[3962]: I0308 21:57:07.698988 3962 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8"} Mar 08 21:57:07.701628 master-0 kubenswrapper[3962]: I0308 21:57:07.698995 3962 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"124ddcbe8852d0f354bd50baf45937bf5acfc703f1acd39074e03c2c5bcba2e8"} Mar 08 21:57:07.701628 master-0 kubenswrapper[3962]: I0308 21:57:07.699005 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" event={"ID":"3624c541-56bf-4e7e-9460-6069eca194b2","Type":"ContainerDied","Data":"08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8"} Mar 08 21:57:07.701628 master-0 kubenswrapper[3962]: I0308 21:57:07.699017 3962 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60"} Mar 08 21:57:07.701628 master-0 kubenswrapper[3962]: I0308 21:57:07.699025 3962 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec"} Mar 08 21:57:07.701628 master-0 kubenswrapper[3962]: I0308 21:57:07.699032 3962 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8"} Mar 08 21:57:07.701628 master-0 kubenswrapper[3962]: I0308 21:57:07.699040 3962 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7"} Mar 08 21:57:07.701628 master-0 kubenswrapper[3962]: I0308 21:57:07.699046 3962 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc"} Mar 08 21:57:07.701628 master-0 kubenswrapper[3962]: I0308 21:57:07.699053 3962 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a"} Mar 08 21:57:07.701628 master-0 kubenswrapper[3962]: I0308 21:57:07.699060 3962 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5"} Mar 08 21:57:07.704228 master-0 kubenswrapper[3962]: I0308 21:57:07.699066 3962 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8"} Mar 08 21:57:07.704228 master-0 kubenswrapper[3962]: I0308 21:57:07.699076 3962 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"124ddcbe8852d0f354bd50baf45937bf5acfc703f1acd39074e03c2c5bcba2e8"} Mar 08 21:57:07.704228 master-0 kubenswrapper[3962]: I0308 21:57:07.699102 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m5qr" event={"ID":"3624c541-56bf-4e7e-9460-6069eca194b2","Type":"ContainerDied","Data":"63b660af69f2087dc4c60773633358d5b6c0baf9d89578945f2e2d8011d5c68e"} Mar 08 21:57:07.704228 master-0 kubenswrapper[3962]: I0308 21:57:07.699113 3962 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60"} Mar 08 21:57:07.704228 master-0 kubenswrapper[3962]: I0308 21:57:07.699158 3962 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec"} Mar 08 21:57:07.704228 master-0 kubenswrapper[3962]: I0308 21:57:07.699166 3962 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8"} Mar 08 21:57:07.704228 master-0 kubenswrapper[3962]: I0308 21:57:07.699172 3962 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7"} Mar 08 21:57:07.704228 master-0 kubenswrapper[3962]: I0308 21:57:07.699178 3962 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc"} Mar 08 21:57:07.704228 master-0 kubenswrapper[3962]: I0308 21:57:07.699185 3962 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a"} Mar 08 21:57:07.704228 master-0 kubenswrapper[3962]: I0308 21:57:07.699192 3962 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5"} Mar 08 21:57:07.704228 master-0 kubenswrapper[3962]: I0308 21:57:07.699200 3962 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8"} Mar 08 21:57:07.704228 master-0 kubenswrapper[3962]: I0308 21:57:07.699217 3962 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"124ddcbe8852d0f354bd50baf45937bf5acfc703f1acd39074e03c2c5bcba2e8"} Mar 08 21:57:07.704228 master-0 kubenswrapper[3962]: I0308 21:57:07.701001 3962 generic.go:334] "Generic (PLEG): container finished" podID="1232f59f-4e6a-46ef-8bec-1bd4e04956ef" containerID="9c0dad4facbead9173c18e63c1454c1d466a90a1041e6859864e005008acb001" exitCode=0 Mar 08 21:57:07.704228 master-0 kubenswrapper[3962]: I0308 21:57:07.701083 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" event={"ID":"1232f59f-4e6a-46ef-8bec-1bd4e04956ef","Type":"ContainerDied","Data":"9c0dad4facbead9173c18e63c1454c1d466a90a1041e6859864e005008acb001"} Mar 08 21:57:07.704228 master-0 kubenswrapper[3962]: I0308 21:57:07.701168 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" event={"ID":"1232f59f-4e6a-46ef-8bec-1bd4e04956ef","Type":"ContainerStarted","Data":"203faae43e1f3878693db94445a48431e9bbc8eef7fc425b238b62ca3a64799d"} Mar 08 21:57:07.755654 master-0 kubenswrapper[3962]: I0308 21:57:07.751955 3962 scope.go:117] "RemoveContainer" containerID="b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec" Mar 08 21:57:07.781583 master-0 kubenswrapper[3962]: I0308 21:57:07.781058 3962 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5m5qr"] Mar 08 21:57:07.788152 master-0 kubenswrapper[3962]: I0308 21:57:07.788057 3962 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5m5qr"] Mar 08 21:57:07.788968 master-0 kubenswrapper[3962]: I0308 21:57:07.788808 3962 scope.go:117] "RemoveContainer" containerID="88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8" Mar 08 21:57:07.823935 master-0 kubenswrapper[3962]: I0308 21:57:07.823884 3962 scope.go:117] "RemoveContainer" containerID="afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7" Mar 08 21:57:07.839638 master-0 kubenswrapper[3962]: I0308 21:57:07.839589 3962 scope.go:117] "RemoveContainer" containerID="815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc" Mar 08 21:57:07.868394 master-0 kubenswrapper[3962]: I0308 21:57:07.868339 3962 scope.go:117] "RemoveContainer" containerID="1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a" Mar 08 21:57:07.886821 master-0 kubenswrapper[3962]: I0308 21:57:07.886768 3962 scope.go:117] "RemoveContainer" 
containerID="debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5" Mar 08 21:57:07.907804 master-0 kubenswrapper[3962]: I0308 21:57:07.907756 3962 scope.go:117] "RemoveContainer" containerID="08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8" Mar 08 21:57:07.927433 master-0 kubenswrapper[3962]: I0308 21:57:07.927367 3962 scope.go:117] "RemoveContainer" containerID="124ddcbe8852d0f354bd50baf45937bf5acfc703f1acd39074e03c2c5bcba2e8" Mar 08 21:57:07.942278 master-0 kubenswrapper[3962]: I0308 21:57:07.942146 3962 scope.go:117] "RemoveContainer" containerID="489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60" Mar 08 21:57:07.942840 master-0 kubenswrapper[3962]: E0308 21:57:07.942789 3962 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60\": container with ID starting with 489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60 not found: ID does not exist" containerID="489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60" Mar 08 21:57:07.942896 master-0 kubenswrapper[3962]: I0308 21:57:07.942847 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60"} err="failed to get container status \"489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60\": rpc error: code = NotFound desc = could not find container \"489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60\": container with ID starting with 489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60 not found: ID does not exist" Mar 08 21:57:07.942896 master-0 kubenswrapper[3962]: I0308 21:57:07.942886 3962 scope.go:117] "RemoveContainer" containerID="b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec" Mar 08 21:57:07.943774 master-0 kubenswrapper[3962]: E0308 21:57:07.943715 3962 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec\": container with ID starting with b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec not found: ID does not exist" containerID="b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec" Mar 08 21:57:07.943844 master-0 kubenswrapper[3962]: I0308 21:57:07.943806 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec"} err="failed to get container status \"b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec\": rpc error: code = NotFound desc = could not find container \"b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec\": container with ID starting with b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec not found: ID does not exist" Mar 08 21:57:07.943902 master-0 kubenswrapper[3962]: I0308 21:57:07.943877 3962 scope.go:117] "RemoveContainer" containerID="88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8" Mar 08 21:57:07.944602 master-0 kubenswrapper[3962]: E0308 21:57:07.944565 3962 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8\": container with ID starting with 
88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8 not found: ID does not exist" containerID="88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8" Mar 08 21:57:07.944667 master-0 kubenswrapper[3962]: I0308 21:57:07.944598 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8"} err="failed to get container status \"88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8\": rpc error: code = NotFound desc = could not find container \"88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8\": container with ID starting with 88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8 not found: ID does not exist" Mar 08 21:57:07.944667 master-0 kubenswrapper[3962]: I0308 21:57:07.944620 3962 scope.go:117] "RemoveContainer" containerID="afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7" Mar 08 21:57:07.945022 master-0 kubenswrapper[3962]: E0308 21:57:07.944984 3962 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7\": container with ID starting with afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7 not found: ID does not exist" containerID="afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7" Mar 08 21:57:07.945094 master-0 kubenswrapper[3962]: I0308 21:57:07.945028 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7"} err="failed to get container status \"afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7\": rpc error: code = NotFound desc = could not find container \"afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7\": container with ID starting with afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7 not found: ID does not exist" Mar 08 21:57:07.945094 master-0 kubenswrapper[3962]: I0308 21:57:07.945057 3962 scope.go:117] "RemoveContainer" containerID="815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc" Mar 08 21:57:07.945687 master-0 kubenswrapper[3962]: E0308 21:57:07.945612 3962 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc\": container with ID starting with 815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc not found: ID does not exist" containerID="815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc" Mar 08 21:57:07.945748 master-0 kubenswrapper[3962]: I0308 21:57:07.945685 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc"} err="failed to get container status \"815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc\": rpc error: code = NotFound desc = could not find container \"815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc\": container with ID starting with 815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc not found: ID does not exist" Mar 08 21:57:07.945748 master-0 kubenswrapper[3962]: I0308 21:57:07.945713 3962 scope.go:117] "RemoveContainer" containerID="1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a" Mar 08 21:57:07.946261 master-0 
kubenswrapper[3962]: E0308 21:57:07.946229 3962 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a\": container with ID starting with 1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a not found: ID does not exist" containerID="1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a" Mar 08 21:57:07.946321 master-0 kubenswrapper[3962]: I0308 21:57:07.946264 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a"} err="failed to get container status \"1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a\": rpc error: code = NotFound desc = could not find container \"1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a\": container with ID starting with 1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a not found: ID does not exist" Mar 08 21:57:07.946321 master-0 kubenswrapper[3962]: I0308 21:57:07.946285 3962 scope.go:117] "RemoveContainer" containerID="debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5" Mar 08 21:57:07.946788 master-0 kubenswrapper[3962]: E0308 21:57:07.946738 3962 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5\": container with ID starting with debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5 not found: ID does not exist" containerID="debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5" Mar 08 21:57:07.946844 master-0 kubenswrapper[3962]: I0308 21:57:07.946796 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5"} err="failed to get container status \"debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5\": rpc error: code = NotFound desc = could not find container \"debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5\": container with ID starting with debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5 not found: ID does not exist" Mar 08 21:57:07.946844 master-0 kubenswrapper[3962]: I0308 21:57:07.946833 3962 scope.go:117] "RemoveContainer" containerID="08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8" Mar 08 21:57:07.947278 master-0 kubenswrapper[3962]: E0308 21:57:07.947249 3962 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8\": container with ID starting with 08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8 not found: ID does not exist" containerID="08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8" Mar 08 21:57:07.947340 master-0 kubenswrapper[3962]: I0308 21:57:07.947279 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8"} err="failed to get container status \"08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8\": rpc error: code = NotFound desc = could not find container \"08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8\": container with ID starting with 
08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8 not found: ID does not exist" Mar 08 21:57:07.947340 master-0 kubenswrapper[3962]: I0308 21:57:07.947298 3962 scope.go:117] "RemoveContainer" containerID="124ddcbe8852d0f354bd50baf45937bf5acfc703f1acd39074e03c2c5bcba2e8" Mar 08 21:57:07.947783 master-0 kubenswrapper[3962]: E0308 21:57:07.947747 3962 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"124ddcbe8852d0f354bd50baf45937bf5acfc703f1acd39074e03c2c5bcba2e8\": container with ID starting with 124ddcbe8852d0f354bd50baf45937bf5acfc703f1acd39074e03c2c5bcba2e8 not found: ID does not exist" containerID="124ddcbe8852d0f354bd50baf45937bf5acfc703f1acd39074e03c2c5bcba2e8" Mar 08 21:57:07.947844 master-0 kubenswrapper[3962]: I0308 21:57:07.947808 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"124ddcbe8852d0f354bd50baf45937bf5acfc703f1acd39074e03c2c5bcba2e8"} err="failed to get container status \"124ddcbe8852d0f354bd50baf45937bf5acfc703f1acd39074e03c2c5bcba2e8\": rpc error: code = NotFound desc = could not find container \"124ddcbe8852d0f354bd50baf45937bf5acfc703f1acd39074e03c2c5bcba2e8\": container with ID starting with 124ddcbe8852d0f354bd50baf45937bf5acfc703f1acd39074e03c2c5bcba2e8 not found: ID does not exist" Mar 08 21:57:07.947844 master-0 kubenswrapper[3962]: I0308 21:57:07.947830 3962 scope.go:117] "RemoveContainer" containerID="489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60" Mar 08 21:57:07.948312 master-0 kubenswrapper[3962]: I0308 21:57:07.948281 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60"} err="failed to get container status \"489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60\": rpc error: code = NotFound desc = could not find container \"489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60\": container with ID starting with 489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60 not found: ID does not exist" Mar 08 21:57:07.948312 master-0 kubenswrapper[3962]: I0308 21:57:07.948310 3962 scope.go:117] "RemoveContainer" containerID="b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec" Mar 08 21:57:07.948795 master-0 kubenswrapper[3962]: I0308 21:57:07.948746 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec"} err="failed to get container status \"b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec\": rpc error: code = NotFound desc = could not find container \"b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec\": container with ID starting with b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec not found: ID does not exist" Mar 08 21:57:07.948851 master-0 kubenswrapper[3962]: I0308 21:57:07.948796 3962 scope.go:117] "RemoveContainer" containerID="88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8" Mar 08 21:57:07.949268 master-0 kubenswrapper[3962]: I0308 21:57:07.949233 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8"} err="failed to get container status \"88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8\": rpc error: code = NotFound desc = could not find 
container \"88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8\": container with ID starting with 88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8 not found: ID does not exist" Mar 08 21:57:07.949317 master-0 kubenswrapper[3962]: I0308 21:57:07.949288 3962 scope.go:117] "RemoveContainer" containerID="afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7" Mar 08 21:57:07.949804 master-0 kubenswrapper[3962]: I0308 21:57:07.949749 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7"} err="failed to get container status \"afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7\": rpc error: code = NotFound desc = could not find container \"afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7\": container with ID starting with afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7 not found: ID does not exist" Mar 08 21:57:07.949859 master-0 kubenswrapper[3962]: I0308 21:57:07.949798 3962 scope.go:117] "RemoveContainer" containerID="815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc" Mar 08 21:57:07.950247 master-0 kubenswrapper[3962]: I0308 21:57:07.950208 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc"} err="failed to get container status \"815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc\": rpc error: code = NotFound desc = could not find container \"815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc\": container with ID starting with 815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc not found: ID does not exist" Mar 08 21:57:07.950247 master-0 kubenswrapper[3962]: I0308 21:57:07.950240 3962 scope.go:117] "RemoveContainer" containerID="1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a" Mar 08 21:57:07.950637 master-0 kubenswrapper[3962]: I0308 21:57:07.950590 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a"} err="failed to get container status \"1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a\": rpc error: code = NotFound desc = could not find container \"1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a\": container with ID starting with 1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a not found: ID does not exist" Mar 08 21:57:07.950682 master-0 kubenswrapper[3962]: I0308 21:57:07.950642 3962 scope.go:117] "RemoveContainer" containerID="debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5" Mar 08 21:57:07.951186 master-0 kubenswrapper[3962]: I0308 21:57:07.951134 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5"} err="failed to get container status \"debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5\": rpc error: code = NotFound desc = could not find container \"debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5\": container with ID starting with debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5 not found: ID does not exist" Mar 08 21:57:07.951186 master-0 kubenswrapper[3962]: I0308 21:57:07.951179 3962 scope.go:117] "RemoveContainer" 
containerID="08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8" Mar 08 21:57:07.951571 master-0 kubenswrapper[3962]: I0308 21:57:07.951527 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8"} err="failed to get container status \"08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8\": rpc error: code = NotFound desc = could not find container \"08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8\": container with ID starting with 08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8 not found: ID does not exist" Mar 08 21:57:07.951571 master-0 kubenswrapper[3962]: I0308 21:57:07.951559 3962 scope.go:117] "RemoveContainer" containerID="124ddcbe8852d0f354bd50baf45937bf5acfc703f1acd39074e03c2c5bcba2e8" Mar 08 21:57:07.952072 master-0 kubenswrapper[3962]: I0308 21:57:07.952028 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"124ddcbe8852d0f354bd50baf45937bf5acfc703f1acd39074e03c2c5bcba2e8"} err="failed to get container status \"124ddcbe8852d0f354bd50baf45937bf5acfc703f1acd39074e03c2c5bcba2e8\": rpc error: code = NotFound desc = could not find container \"124ddcbe8852d0f354bd50baf45937bf5acfc703f1acd39074e03c2c5bcba2e8\": container with ID starting with 124ddcbe8852d0f354bd50baf45937bf5acfc703f1acd39074e03c2c5bcba2e8 not found: ID does not exist" Mar 08 21:57:07.952072 master-0 kubenswrapper[3962]: I0308 21:57:07.952058 3962 scope.go:117] "RemoveContainer" containerID="489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60" Mar 08 21:57:07.952508 master-0 kubenswrapper[3962]: I0308 21:57:07.952455 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60"} err="failed to get container status \"489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60\": rpc error: code = NotFound desc = could not find container \"489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60\": container with ID starting with 489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60 not found: ID does not exist" Mar 08 21:57:07.952508 master-0 kubenswrapper[3962]: I0308 21:57:07.952497 3962 scope.go:117] "RemoveContainer" containerID="b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec" Mar 08 21:57:07.952848 master-0 kubenswrapper[3962]: I0308 21:57:07.952810 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec"} err="failed to get container status \"b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec\": rpc error: code = NotFound desc = could not find container \"b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec\": container with ID starting with b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec not found: ID does not exist" Mar 08 21:57:07.952848 master-0 kubenswrapper[3962]: I0308 21:57:07.952841 3962 scope.go:117] "RemoveContainer" containerID="88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8" Mar 08 21:57:07.953245 master-0 kubenswrapper[3962]: I0308 21:57:07.953204 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8"} err="failed to get container status 
\"88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8\": rpc error: code = NotFound desc = could not find container \"88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8\": container with ID starting with 88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8 not found: ID does not exist" Mar 08 21:57:07.953245 master-0 kubenswrapper[3962]: I0308 21:57:07.953239 3962 scope.go:117] "RemoveContainer" containerID="afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7" Mar 08 21:57:07.953574 master-0 kubenswrapper[3962]: I0308 21:57:07.953532 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7"} err="failed to get container status \"afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7\": rpc error: code = NotFound desc = could not find container \"afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7\": container with ID starting with afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7 not found: ID does not exist" Mar 08 21:57:07.953574 master-0 kubenswrapper[3962]: I0308 21:57:07.953562 3962 scope.go:117] "RemoveContainer" containerID="815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc" Mar 08 21:57:07.953919 master-0 kubenswrapper[3962]: I0308 21:57:07.953877 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc"} err="failed to get container status \"815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc\": rpc error: code = NotFound desc = could not find container \"815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc\": container with ID starting with 815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc not found: ID does not exist" Mar 08 21:57:07.953919 master-0 kubenswrapper[3962]: I0308 21:57:07.953907 3962 scope.go:117] "RemoveContainer" containerID="1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a" Mar 08 21:57:07.954371 master-0 kubenswrapper[3962]: I0308 21:57:07.954326 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a"} err="failed to get container status \"1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a\": rpc error: code = NotFound desc = could not find container \"1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a\": container with ID starting with 1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a not found: ID does not exist" Mar 08 21:57:07.954371 master-0 kubenswrapper[3962]: I0308 21:57:07.954361 3962 scope.go:117] "RemoveContainer" containerID="debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5" Mar 08 21:57:07.954854 master-0 kubenswrapper[3962]: I0308 21:57:07.954814 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5"} err="failed to get container status \"debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5\": rpc error: code = NotFound desc = could not find container \"debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5\": container with ID starting with debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5 not found: ID does not exist" Mar 08 21:57:07.954854 master-0 
kubenswrapper[3962]: I0308 21:57:07.954843 3962 scope.go:117] "RemoveContainer" containerID="08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8" Mar 08 21:57:07.955475 master-0 kubenswrapper[3962]: I0308 21:57:07.955443 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8"} err="failed to get container status \"08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8\": rpc error: code = NotFound desc = could not find container \"08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8\": container with ID starting with 08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8 not found: ID does not exist" Mar 08 21:57:07.955526 master-0 kubenswrapper[3962]: I0308 21:57:07.955472 3962 scope.go:117] "RemoveContainer" containerID="124ddcbe8852d0f354bd50baf45937bf5acfc703f1acd39074e03c2c5bcba2e8" Mar 08 21:57:07.955915 master-0 kubenswrapper[3962]: I0308 21:57:07.955850 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"124ddcbe8852d0f354bd50baf45937bf5acfc703f1acd39074e03c2c5bcba2e8"} err="failed to get container status \"124ddcbe8852d0f354bd50baf45937bf5acfc703f1acd39074e03c2c5bcba2e8\": rpc error: code = NotFound desc = could not find container \"124ddcbe8852d0f354bd50baf45937bf5acfc703f1acd39074e03c2c5bcba2e8\": container with ID starting with 124ddcbe8852d0f354bd50baf45937bf5acfc703f1acd39074e03c2c5bcba2e8 not found: ID does not exist" Mar 08 21:57:07.955958 master-0 kubenswrapper[3962]: I0308 21:57:07.955919 3962 scope.go:117] "RemoveContainer" containerID="489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60" Mar 08 21:57:07.956340 master-0 kubenswrapper[3962]: I0308 21:57:07.956296 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60"} err="failed to get container status \"489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60\": rpc error: code = NotFound desc = could not find container \"489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60\": container with ID starting with 489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60 not found: ID does not exist" Mar 08 21:57:07.956340 master-0 kubenswrapper[3962]: I0308 21:57:07.956333 3962 scope.go:117] "RemoveContainer" containerID="b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec" Mar 08 21:57:07.956690 master-0 kubenswrapper[3962]: I0308 21:57:07.956644 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec"} err="failed to get container status \"b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec\": rpc error: code = NotFound desc = could not find container \"b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec\": container with ID starting with b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec not found: ID does not exist" Mar 08 21:57:07.956690 master-0 kubenswrapper[3962]: I0308 21:57:07.956679 3962 scope.go:117] "RemoveContainer" containerID="88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8" Mar 08 21:57:07.956996 master-0 kubenswrapper[3962]: I0308 21:57:07.956952 3962 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8"} err="failed to get container status \"88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8\": rpc error: code = NotFound desc = could not find container \"88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8\": container with ID starting with 88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8 not found: ID does not exist" Mar 08 21:57:07.956996 master-0 kubenswrapper[3962]: I0308 21:57:07.956988 3962 scope.go:117] "RemoveContainer" containerID="afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7" Mar 08 21:57:07.957339 master-0 kubenswrapper[3962]: I0308 21:57:07.957303 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7"} err="failed to get container status \"afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7\": rpc error: code = NotFound desc = could not find container \"afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7\": container with ID starting with afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7 not found: ID does not exist" Mar 08 21:57:07.957339 master-0 kubenswrapper[3962]: I0308 21:57:07.957331 3962 scope.go:117] "RemoveContainer" containerID="815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc" Mar 08 21:57:07.957648 master-0 kubenswrapper[3962]: I0308 21:57:07.957609 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc"} err="failed to get container status \"815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc\": rpc error: code = NotFound desc = could not find container \"815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc\": container with ID starting with 815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc not found: ID does not exist" Mar 08 21:57:07.957648 master-0 kubenswrapper[3962]: I0308 21:57:07.957638 3962 scope.go:117] "RemoveContainer" containerID="1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a" Mar 08 21:57:07.957955 master-0 kubenswrapper[3962]: I0308 21:57:07.957920 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a"} err="failed to get container status \"1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a\": rpc error: code = NotFound desc = could not find container \"1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a\": container with ID starting with 1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a not found: ID does not exist" Mar 08 21:57:07.957955 master-0 kubenswrapper[3962]: I0308 21:57:07.957945 3962 scope.go:117] "RemoveContainer" containerID="debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5" Mar 08 21:57:07.958278 master-0 kubenswrapper[3962]: I0308 21:57:07.958241 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5"} err="failed to get container status \"debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5\": rpc error: code = NotFound desc = could not find container \"debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5\": container with ID starting with 
debbd8578f893fc6340e8446a7b30d3a723ddff99cdbf4004a357ff70131cbe5 not found: ID does not exist" Mar 08 21:57:07.958278 master-0 kubenswrapper[3962]: I0308 21:57:07.958267 3962 scope.go:117] "RemoveContainer" containerID="08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8" Mar 08 21:57:07.958775 master-0 kubenswrapper[3962]: I0308 21:57:07.958698 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8"} err="failed to get container status \"08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8\": rpc error: code = NotFound desc = could not find container \"08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8\": container with ID starting with 08ef9565bb3a8b113fd0c09645509241d42bfe855ed16eb2829436e71a7d13d8 not found: ID does not exist" Mar 08 21:57:07.958823 master-0 kubenswrapper[3962]: I0308 21:57:07.958773 3962 scope.go:117] "RemoveContainer" containerID="124ddcbe8852d0f354bd50baf45937bf5acfc703f1acd39074e03c2c5bcba2e8" Mar 08 21:57:07.959291 master-0 kubenswrapper[3962]: I0308 21:57:07.959252 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"124ddcbe8852d0f354bd50baf45937bf5acfc703f1acd39074e03c2c5bcba2e8"} err="failed to get container status \"124ddcbe8852d0f354bd50baf45937bf5acfc703f1acd39074e03c2c5bcba2e8\": rpc error: code = NotFound desc = could not find container \"124ddcbe8852d0f354bd50baf45937bf5acfc703f1acd39074e03c2c5bcba2e8\": container with ID starting with 124ddcbe8852d0f354bd50baf45937bf5acfc703f1acd39074e03c2c5bcba2e8 not found: ID does not exist" Mar 08 21:57:07.959291 master-0 kubenswrapper[3962]: I0308 21:57:07.959283 3962 scope.go:117] "RemoveContainer" containerID="489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60" Mar 08 21:57:07.959661 master-0 kubenswrapper[3962]: I0308 21:57:07.959611 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60"} err="failed to get container status \"489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60\": rpc error: code = NotFound desc = could not find container \"489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60\": container with ID starting with 489bbd09e7b24320803dcc90148c1b339c316965298925ea73eeab711638bc60 not found: ID does not exist" Mar 08 21:57:07.959661 master-0 kubenswrapper[3962]: I0308 21:57:07.959654 3962 scope.go:117] "RemoveContainer" containerID="b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec" Mar 08 21:57:07.960079 master-0 kubenswrapper[3962]: I0308 21:57:07.960024 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec"} err="failed to get container status \"b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec\": rpc error: code = NotFound desc = could not find container \"b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec\": container with ID starting with b0208fc87e77f06b6ca43a915fd3f60bcd5085f790d61def4bb86d725d73b2ec not found: ID does not exist" Mar 08 21:57:07.960079 master-0 kubenswrapper[3962]: I0308 21:57:07.960066 3962 scope.go:117] "RemoveContainer" containerID="88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8" Mar 08 21:57:07.960652 master-0 kubenswrapper[3962]: I0308 21:57:07.960600 3962 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8"} err="failed to get container status \"88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8\": rpc error: code = NotFound desc = could not find container \"88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8\": container with ID starting with 88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8 not found: ID does not exist" Mar 08 21:57:07.960652 master-0 kubenswrapper[3962]: I0308 21:57:07.960645 3962 scope.go:117] "RemoveContainer" containerID="afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7" Mar 08 21:57:07.961243 master-0 kubenswrapper[3962]: I0308 21:57:07.961203 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7"} err="failed to get container status \"afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7\": rpc error: code = NotFound desc = could not find container \"afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7\": container with ID starting with afdc71209428fdf412c146e6ec110e0d3310ddb2cd5d77a836e6f17f0e9852a7 not found: ID does not exist" Mar 08 21:57:07.961243 master-0 kubenswrapper[3962]: I0308 21:57:07.961235 3962 scope.go:117] "RemoveContainer" containerID="815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc" Mar 08 21:57:07.961740 master-0 kubenswrapper[3962]: I0308 21:57:07.961681 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc"} err="failed to get container status \"815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc\": rpc error: code = NotFound desc = could not find container \"815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc\": container with ID starting with 815bc3fd32c0904951c62a9fe0c4335a9f197af6c3fc8815cd51122d594c69fc not found: ID does not exist" Mar 08 21:57:07.961823 master-0 kubenswrapper[3962]: I0308 21:57:07.961758 3962 scope.go:117] "RemoveContainer" containerID="1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a" Mar 08 21:57:07.962396 master-0 kubenswrapper[3962]: I0308 21:57:07.962329 3962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a"} err="failed to get container status \"1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a\": rpc error: code = NotFound desc = could not find container \"1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a\": container with ID starting with 1881096d54ed7e1407d860e9acedf0c42ebc04e33fa0d1247fcedccd3446e67a not found: ID does not exist" Mar 08 21:57:08.186600 master-0 kubenswrapper[3962]: I0308 21:57:08.186533 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:57:08.186751 master-0 kubenswrapper[3962]: E0308 21:57:08.186714 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
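Each "RemoveContainer" / "DeleteContainer returned error" pair above is the kubelet re-deleting a container whose record CRI-O has already dropped; the CRI call answers NotFound and the kubelet treats the container as gone. A minimal Go sketch of that idempotent cleanup pattern against a CRI endpoint follows; the socket path and client wiring are illustrative assumptions, not the kubelet's actual code:

```go
// Sketch: remove a container via CRI, treating NotFound as success,
// mirroring the idempotent cleanup visible in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// removeContainerIdempotent deletes containerID through the CRI runtime
// service; a NotFound response means the runtime has already forgotten
// the container, so the cleanup counts as complete rather than failed.
func removeContainerIdempotent(ctx context.Context, rt runtimeapi.RuntimeServiceClient, containerID string) error {
	_, err := rt.RemoveContainer(ctx, &runtimeapi.RemoveContainerRequest{ContainerId: containerID})
	if status.Code(err) == codes.NotFound {
		return nil // already gone: desired state reached
	}
	return err
}

func main() {
	// The socket path is an assumption; CRI-O commonly listens here.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	err = removeContainerIdempotent(ctx, runtimeapi.NewRuntimeServiceClient(conn),
		"88cd8b8884ac06eefd93cb63423276a9464bab8c6da022a546a7cc7e964bfce8")
	fmt.Println("cleanup result:", err)
}
```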
pod="openshift-multus/network-metrics-daemon-lqdbv" podUID="44e67e41-045e-42ef-8f60-6ef15606d6a2" Mar 08 21:57:08.712312 master-0 kubenswrapper[3962]: I0308 21:57:08.712073 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" event={"ID":"1232f59f-4e6a-46ef-8bec-1bd4e04956ef","Type":"ContainerStarted","Data":"01b662ddcad9510543c9dcc9932df7768b979fc609e31541baad3f6e71c738be"} Mar 08 21:57:08.712312 master-0 kubenswrapper[3962]: I0308 21:57:08.712169 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" event={"ID":"1232f59f-4e6a-46ef-8bec-1bd4e04956ef","Type":"ContainerStarted","Data":"e9c1e20ed9bedb939865dac300c7958ce6d0193156b71a6754079e06a20f4c89"} Mar 08 21:57:08.712312 master-0 kubenswrapper[3962]: I0308 21:57:08.712193 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" event={"ID":"1232f59f-4e6a-46ef-8bec-1bd4e04956ef","Type":"ContainerStarted","Data":"3b192f8314031425fe5254e1d012f49629ef523f84bd3270e86d481cd6843fc0"} Mar 08 21:57:08.712312 master-0 kubenswrapper[3962]: I0308 21:57:08.712210 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" event={"ID":"1232f59f-4e6a-46ef-8bec-1bd4e04956ef","Type":"ContainerStarted","Data":"a23a3557d33fe6a5a9e6280202be1cb13261d5f9b76e81ae2f08a8aac1599e14"} Mar 08 21:57:08.712312 master-0 kubenswrapper[3962]: I0308 21:57:08.712227 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" event={"ID":"1232f59f-4e6a-46ef-8bec-1bd4e04956ef","Type":"ContainerStarted","Data":"df48fc1dc00a53360dd1855fc01fcb1f1e56dd89236b218193c7e65caf253098"} Mar 08 21:57:08.712312 master-0 kubenswrapper[3962]: I0308 21:57:08.712243 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" event={"ID":"1232f59f-4e6a-46ef-8bec-1bd4e04956ef","Type":"ContainerStarted","Data":"e7868786f9174536b33680c3c4367751fff82b1f36f4e75683e156d299417e58"} Mar 08 21:57:09.188456 master-0 kubenswrapper[3962]: I0308 21:57:09.188383 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:57:09.188813 master-0 kubenswrapper[3962]: E0308 21:57:09.188543 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-djlff" podUID="f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e" Mar 08 21:57:09.198400 master-0 kubenswrapper[3962]: I0308 21:57:09.198349 3962 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3624c541-56bf-4e7e-9460-6069eca194b2" path="/var/lib/kubelet/pods/3624c541-56bf-4e7e-9460-6069eca194b2/volumes" Mar 08 21:57:09.201808 master-0 kubenswrapper[3962]: I0308 21:57:09.201739 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 08 21:57:10.187394 master-0 kubenswrapper[3962]: I0308 21:57:10.187253 3962 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:57:10.188360 master-0 kubenswrapper[3962]: E0308 21:57:10.187496 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lqdbv" podUID="44e67e41-045e-42ef-8f60-6ef15606d6a2" Mar 08 21:57:10.723437 master-0 kubenswrapper[3962]: I0308 21:57:10.723392 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" event={"ID":"1232f59f-4e6a-46ef-8bec-1bd4e04956ef","Type":"ContainerStarted","Data":"3f3a585720cb97b60eb8cbfaa667ccc12e6f29874fd7d55b67a47aea9a291100"} Mar 08 21:57:11.186643 master-0 kubenswrapper[3962]: I0308 21:57:11.186562 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:57:11.187182 master-0 kubenswrapper[3962]: E0308 21:57:11.187142 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-djlff" podUID="f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e" Mar 08 21:57:12.187991 master-0 kubenswrapper[3962]: I0308 21:57:12.187480 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:57:12.189141 master-0 kubenswrapper[3962]: E0308 21:57:12.188344 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lqdbv" podUID="44e67e41-045e-42ef-8f60-6ef15606d6a2" Mar 08 21:57:12.939598 master-0 kubenswrapper[3962]: I0308 21:57:12.939527 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:57:12.939766 master-0 kubenswrapper[3962]: E0308 21:57:12.939723 3962 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 08 21:57:12.939837 master-0 kubenswrapper[3962]: E0308 21:57:12.939815 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert podName:d287e2ca-f134-4e34-96f7-50a3055ee119 nodeName:}" failed. No retries permitted until 2026-03-08 21:58:16.939785308 +0000 UTC m=+164.573057550 (durationBeforeRetry 1m4s). 
Mar 08 21:57:13.041678 master-0 kubenswrapper[3962]: I0308 21:57:13.041583 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5xq4\" (UniqueName: \"kubernetes.io/projected/f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e-kube-api-access-l5xq4\") pod \"network-check-target-djlff\" (UID: \"f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e\") " pod="openshift-network-diagnostics/network-check-target-djlff"
Mar 08 21:57:13.041897 master-0 kubenswrapper[3962]: E0308 21:57:13.041866 3962 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 08 21:57:13.042124 master-0 kubenswrapper[3962]: E0308 21:57:13.041907 3962 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 08 21:57:13.042124 master-0 kubenswrapper[3962]: E0308 21:57:13.041932 3962 projected.go:194] Error preparing data for projected volume kube-api-access-l5xq4 for pod openshift-network-diagnostics/network-check-target-djlff: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 08 21:57:13.042124 master-0 kubenswrapper[3962]: E0308 21:57:13.042021 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e-kube-api-access-l5xq4 podName:f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e nodeName:}" failed. No retries permitted until 2026-03-08 21:57:45.041994314 +0000 UTC m=+132.675266666 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-l5xq4" (UniqueName: "kubernetes.io/projected/f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e-kube-api-access-l5xq4") pod "network-check-target-djlff" (UID: "f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 08 21:57:13.188744 master-0 kubenswrapper[3962]: I0308 21:57:13.188661 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-djlff"
Mar 08 21:57:13.190126 master-0 kubenswrapper[3962]: E0308 21:57:13.188789 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-djlff" podUID="f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e"
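The projected.go errors above show what a kube-api-access-* volume is assembled from: a bound service-account token plus the kube-root-ca.crt and openshift-service-ca.crt ConfigMaps, and setup fails until the kubelet's reflectors have registered those objects. The same shape written out with client-go types; the token lifetime and the Items mapping are illustrative assumptions:

```go
// What a "kube-api-access-*" projected volume contains, expressed as
// client-side Go types mirroring the volume the kubelet is mounting above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	expiry := int64(3607) // kube-api-access tokens commonly use ~1h (assumption)
	vol := corev1.Volume{
		Name: "kube-api-access-l5xq4",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					// Bound service-account token minted by the kubelet.
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &expiry,
					}},
					// Cluster CA bundle; this is the object the
					// "kube-root-ca.crt not registered" error refers to.
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					// OpenShift service CA bundle, the second
					// unregistered object in the error above.
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "openshift-service-ca.crt"},
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
```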
pod="openshift-network-diagnostics/network-check-target-djlff" podUID="f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e" Mar 08 21:57:13.312486 master-0 kubenswrapper[3962]: I0308 21:57:13.312220 3962 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-controller-manager-master-0" podStartSLOduration=4.312191477 podStartE2EDuration="4.312191477s" podCreationTimestamp="2026-03-08 21:57:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 21:57:13.310747409 +0000 UTC m=+100.944019621" watchObservedRunningTime="2026-03-08 21:57:13.312191477 +0000 UTC m=+100.945463719" Mar 08 21:57:13.741800 master-0 kubenswrapper[3962]: I0308 21:57:13.741742 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" event={"ID":"1232f59f-4e6a-46ef-8bec-1bd4e04956ef","Type":"ContainerStarted","Data":"a83c2447e70960fbdfe950dd6467011dacf3bf1df2039d80bb85ed744ae22114"} Mar 08 21:57:13.742317 master-0 kubenswrapper[3962]: I0308 21:57:13.742260 3962 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:13.742473 master-0 kubenswrapper[3962]: I0308 21:57:13.742453 3962 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:13.742594 master-0 kubenswrapper[3962]: I0308 21:57:13.742574 3962 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:13.820613 master-0 kubenswrapper[3962]: I0308 21:57:13.820528 3962 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" podStartSLOduration=6.820497108 podStartE2EDuration="6.820497108s" podCreationTimestamp="2026-03-08 21:57:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 21:57:13.820459097 +0000 UTC m=+101.453731389" watchObservedRunningTime="2026-03-08 21:57:13.820497108 +0000 UTC m=+101.453769340" Mar 08 21:57:13.825428 master-0 kubenswrapper[3962]: I0308 21:57:13.825360 3962 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:13.827481 master-0 kubenswrapper[3962]: I0308 21:57:13.827411 3962 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:13.917654 master-0 kubenswrapper[3962]: I0308 21:57:13.917590 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-djlff"] Mar 08 21:57:13.917896 master-0 kubenswrapper[3962]: I0308 21:57:13.917734 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:57:13.917896 master-0 kubenswrapper[3962]: E0308 21:57:13.917862 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-djlff" podUID="f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e" Mar 08 21:57:13.920380 master-0 kubenswrapper[3962]: I0308 21:57:13.920335 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lqdbv"] Mar 08 21:57:13.920674 master-0 kubenswrapper[3962]: I0308 21:57:13.920647 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:57:13.920957 master-0 kubenswrapper[3962]: E0308 21:57:13.920916 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lqdbv" podUID="44e67e41-045e-42ef-8f60-6ef15606d6a2" Mar 08 21:57:15.186924 master-0 kubenswrapper[3962]: I0308 21:57:15.186821 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:57:15.188212 master-0 kubenswrapper[3962]: I0308 21:57:15.186868 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:57:15.188212 master-0 kubenswrapper[3962]: E0308 21:57:15.187032 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-djlff" podUID="f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e" Mar 08 21:57:15.188212 master-0 kubenswrapper[3962]: E0308 21:57:15.187153 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lqdbv" podUID="44e67e41-045e-42ef-8f60-6ef15606d6a2" Mar 08 21:57:17.187471 master-0 kubenswrapper[3962]: I0308 21:57:17.187362 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:57:17.188482 master-0 kubenswrapper[3962]: I0308 21:57:17.187378 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:57:17.188482 master-0 kubenswrapper[3962]: E0308 21:57:17.187547 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-djlff" podUID="f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e" Mar 08 21:57:17.188482 master-0 kubenswrapper[3962]: E0308 21:57:17.187682 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lqdbv" podUID="44e67e41-045e-42ef-8f60-6ef15606d6a2" Mar 08 21:57:18.209250 master-0 kubenswrapper[3962]: I0308 21:57:18.209146 3962 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady" Mar 08 21:57:18.210298 master-0 kubenswrapper[3962]: I0308 21:57:18.209396 3962 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Mar 08 21:57:18.258544 master-0 kubenswrapper[3962]: I0308 21:57:18.258291 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8"] Mar 08 21:57:18.258946 master-0 kubenswrapper[3962]: I0308 21:57:18.258891 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-589895fbb7-wtvp5"] Mar 08 21:57:18.258946 master-0 kubenswrapper[3962]: I0308 21:57:18.258936 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" Mar 08 21:57:18.264136 master-0 kubenswrapper[3962]: I0308 21:57:18.260201 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-64488f9d78-krpfs"] Mar 08 21:57:18.264136 master-0 kubenswrapper[3962]: I0308 21:57:18.260779 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" Mar 08 21:57:18.264136 master-0 kubenswrapper[3962]: I0308 21:57:18.261273 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w"] Mar 08 21:57:18.264136 master-0 kubenswrapper[3962]: I0308 21:57:18.261825 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x"] Mar 08 21:57:18.264136 master-0 kubenswrapper[3962]: I0308 21:57:18.262301 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2"] Mar 08 21:57:18.264136 master-0 kubenswrapper[3962]: I0308 21:57:18.262323 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" Mar 08 21:57:18.264136 master-0 kubenswrapper[3962]: I0308 21:57:18.262438 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" Mar 08 21:57:18.264136 master-0 kubenswrapper[3962]: I0308 21:57:18.262585 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:18.264136 master-0 kubenswrapper[3962]: I0308 21:57:18.263472 3962 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2" Mar 08 21:57:18.264783 master-0 kubenswrapper[3962]: I0308 21:57:18.264522 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr"] Mar 08 21:57:18.267040 master-0 kubenswrapper[3962]: I0308 21:57:18.265068 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:18.267040 master-0 kubenswrapper[3962]: I0308 21:57:18.265480 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484"] Mar 08 21:57:18.267040 master-0 kubenswrapper[3962]: I0308 21:57:18.265727 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" Mar 08 21:57:18.275192 master-0 kubenswrapper[3962]: I0308 21:57:18.275114 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k"] Mar 08 21:57:18.275548 master-0 kubenswrapper[3962]: I0308 21:57:18.275496 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k" Mar 08 21:57:18.277825 master-0 kubenswrapper[3962]: I0308 21:57:18.277740 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-nl9qg"] Mar 08 21:57:18.278167 master-0 kubenswrapper[3962]: I0308 21:57:18.278031 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-nl9qg" Mar 08 21:57:18.278884 master-0 kubenswrapper[3962]: I0308 21:57:18.278839 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-677db989d6-cjdgr"] Mar 08 21:57:18.279243 master-0 kubenswrapper[3962]: I0308 21:57:18.279204 3962 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 21:57:18.280878 master-0 kubenswrapper[3962]: I0308 21:57:18.280773 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 08 21:57:18.284127 master-0 kubenswrapper[3962]: I0308 21:57:18.281210 3962 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 08 21:57:18.284127 master-0 kubenswrapper[3962]: I0308 21:57:18.281275 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 08 21:57:18.284127 master-0 kubenswrapper[3962]: I0308 21:57:18.281352 3962 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 08 21:57:18.284127 master-0 kubenswrapper[3962]: I0308 21:57:18.281533 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 08 21:57:18.284127 master-0 kubenswrapper[3962]: I0308 21:57:18.281622 3962 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 08 21:57:18.284127 master-0 kubenswrapper[3962]: I0308 21:57:18.281634 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 08 21:57:18.284127 master-0 kubenswrapper[3962]: I0308 21:57:18.281747 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 08 21:57:18.284127 master-0 kubenswrapper[3962]: I0308 21:57:18.281837 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 08 21:57:18.284127 master-0 kubenswrapper[3962]: I0308 21:57:18.281930 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 08 21:57:18.284127 master-0 kubenswrapper[3962]: I0308 21:57:18.282014 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 08 21:57:18.284127 master-0 kubenswrapper[3962]: I0308 21:57:18.282139 3962 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 08 21:57:18.284127 master-0 kubenswrapper[3962]: I0308 21:57:18.282234 3962 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 08 21:57:18.284127 master-0 kubenswrapper[3962]: I0308 21:57:18.282319 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 08 21:57:18.284127 master-0 kubenswrapper[3962]: I0308 21:57:18.282368 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 08 21:57:18.284127 master-0 kubenswrapper[3962]: I0308 21:57:18.282461 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 08 21:57:18.284127 master-0 kubenswrapper[3962]: I0308 21:57:18.282571 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2"] Mar 08 21:57:18.284127 
master-0 kubenswrapper[3962]: I0308 21:57:18.282576 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 08 21:57:18.284127 master-0 kubenswrapper[3962]: I0308 21:57:18.282675 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 08 21:57:18.284127 master-0 kubenswrapper[3962]: I0308 21:57:18.282882 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" Mar 08 21:57:18.290111 master-0 kubenswrapper[3962]: I0308 21:57:18.286906 3962 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 08 21:57:18.290111 master-0 kubenswrapper[3962]: I0308 21:57:18.288214 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw"] Mar 08 21:57:18.290111 master-0 kubenswrapper[3962]: I0308 21:57:18.288832 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw" Mar 08 21:57:18.290477 master-0 kubenswrapper[3962]: I0308 21:57:18.290295 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr"] Mar 08 21:57:18.293390 master-0 kubenswrapper[3962]: I0308 21:57:18.290933 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 21:57:18.293390 master-0 kubenswrapper[3962]: I0308 21:57:18.293385 3962 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 08 21:57:18.293573 master-0 kubenswrapper[3962]: I0308 21:57:18.293494 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5"] Mar 08 21:57:18.297113 master-0 kubenswrapper[3962]: I0308 21:57:18.293745 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 08 21:57:18.297113 master-0 kubenswrapper[3962]: I0308 21:57:18.293789 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c"] Mar 08 21:57:18.297113 master-0 kubenswrapper[3962]: I0308 21:57:18.293815 3962 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 08 21:57:18.297113 master-0 kubenswrapper[3962]: I0308 21:57:18.293993 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx"] Mar 08 21:57:18.297113 master-0 kubenswrapper[3962]: I0308 21:57:18.294311 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 21:57:18.297113 master-0 kubenswrapper[3962]: I0308 21:57:18.294876 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5" Mar 08 21:57:18.297113 master-0 kubenswrapper[3962]: I0308 21:57:18.295137 3962 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" Mar 08 21:57:18.297463 master-0 kubenswrapper[3962]: I0308 21:57:18.297406 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 08 21:57:18.299591 master-0 kubenswrapper[3962]: I0308 21:57:18.298462 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 08 21:57:18.299591 master-0 kubenswrapper[3962]: I0308 21:57:18.298752 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 08 21:57:18.299591 master-0 kubenswrapper[3962]: I0308 21:57:18.298768 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 08 21:57:18.299591 master-0 kubenswrapper[3962]: I0308 21:57:18.298853 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-ddw98"] Mar 08 21:57:18.299591 master-0 kubenswrapper[3962]: I0308 21:57:18.299119 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 08 21:57:18.299591 master-0 kubenswrapper[3962]: I0308 21:57:18.299302 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 08 21:57:18.299591 master-0 kubenswrapper[3962]: I0308 21:57:18.299470 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf"] Mar 08 21:57:18.299591 master-0 kubenswrapper[3962]: I0308 21:57:18.299483 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 08 21:57:18.299888 master-0 kubenswrapper[3962]: I0308 21:57:18.299652 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 08 21:57:18.299888 master-0 kubenswrapper[3962]: I0308 21:57:18.299705 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg"] Mar 08 21:57:18.299958 master-0 kubenswrapper[3962]: I0308 21:57:18.299890 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:18.299958 master-0 kubenswrapper[3962]: I0308 21:57:18.299924 3962 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-ddw98" Mar 08 21:57:18.315190 master-0 kubenswrapper[3962]: I0308 21:57:18.299892 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 08 21:57:18.319177 master-0 kubenswrapper[3962]: I0308 21:57:18.317538 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 08 21:57:18.319177 master-0 kubenswrapper[3962]: I0308 21:57:18.317631 3962 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 08 21:57:18.319177 master-0 kubenswrapper[3962]: I0308 21:57:18.317999 3962 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 08 21:57:18.319177 master-0 kubenswrapper[3962]: I0308 21:57:18.318473 3962 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 08 21:57:18.324086 master-0 kubenswrapper[3962]: I0308 21:57:18.324013 3962 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 08 21:57:18.324208 master-0 kubenswrapper[3962]: I0308 21:57:18.318747 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 08 21:57:18.324465 master-0 kubenswrapper[3962]: I0308 21:57:18.324430 3962 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 08 21:57:18.324548 master-0 kubenswrapper[3962]: I0308 21:57:18.324502 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 08 21:57:18.324649 master-0 kubenswrapper[3962]: I0308 21:57:18.324615 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg" Mar 08 21:57:18.324733 master-0 kubenswrapper[3962]: I0308 21:57:18.324713 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 08 21:57:18.324895 master-0 kubenswrapper[3962]: I0308 21:57:18.324862 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 08 21:57:18.328484 master-0 kubenswrapper[3962]: I0308 21:57:18.324526 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25"] Mar 08 21:57:18.328484 master-0 kubenswrapper[3962]: I0308 21:57:18.326518 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 08 21:57:18.328484 master-0 kubenswrapper[3962]: I0308 21:57:18.327307 3962 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" Mar 08 21:57:18.328484 master-0 kubenswrapper[3962]: I0308 21:57:18.327715 3962 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 08 21:57:18.328484 master-0 kubenswrapper[3962]: I0308 21:57:18.328125 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 08 21:57:18.328484 master-0 kubenswrapper[3962]: I0308 21:57:18.328337 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 08 21:57:18.328484 master-0 kubenswrapper[3962]: I0308 21:57:18.328453 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 08 21:57:18.332550 master-0 kubenswrapper[3962]: I0308 21:57:18.331417 3962 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 08 21:57:18.332550 master-0 kubenswrapper[3962]: I0308 21:57:18.331499 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 08 21:57:18.332550 master-0 kubenswrapper[3962]: I0308 21:57:18.331637 3962 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 08 21:57:18.332550 master-0 kubenswrapper[3962]: I0308 21:57:18.331743 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 08 21:57:18.332550 master-0 kubenswrapper[3962]: I0308 21:57:18.331790 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 08 21:57:18.332550 master-0 kubenswrapper[3962]: I0308 21:57:18.331760 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 08 21:57:18.332550 master-0 kubenswrapper[3962]: I0308 21:57:18.331877 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 08 21:57:18.332550 master-0 kubenswrapper[3962]: I0308 21:57:18.331885 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 08 21:57:18.332550 master-0 kubenswrapper[3962]: I0308 21:57:18.331932 3962 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 08 21:57:18.332550 master-0 kubenswrapper[3962]: I0308 21:57:18.332019 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 08 21:57:18.332550 master-0 kubenswrapper[3962]: I0308 21:57:18.332063 3962 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 08 21:57:18.332550 master-0 kubenswrapper[3962]: I0308 21:57:18.332140 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 08 21:57:18.332550 master-0 kubenswrapper[3962]: I0308 21:57:18.332108 3962 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 08 21:57:18.337205 master-0 kubenswrapper[3962]: I0308 21:57:18.336171 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 08 21:57:18.337205 master-0 kubenswrapper[3962]: I0308 21:57:18.336656 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 08 21:57:18.337205 master-0 kubenswrapper[3962]: I0308 21:57:18.336795 3962 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 08 21:57:18.337205 master-0 kubenswrapper[3962]: I0308 21:57:18.336979 3962 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 08 21:57:18.337205 master-0 kubenswrapper[3962]: I0308 21:57:18.337012 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 08 21:57:18.337205 master-0 kubenswrapper[3962]: I0308 21:57:18.337197 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8"] Mar 08 21:57:18.337205 master-0 kubenswrapper[3962]: I0308 21:57:18.337208 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 08 21:57:18.340293 master-0 kubenswrapper[3962]: I0308 21:57:18.337530 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w"] Mar 08 21:57:18.340293 master-0 kubenswrapper[3962]: I0308 21:57:18.337784 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 08 21:57:18.340293 master-0 kubenswrapper[3962]: I0308 21:57:18.337946 3962 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 08 21:57:18.340293 master-0 kubenswrapper[3962]: I0308 21:57:18.338177 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh"] Mar 08 21:57:18.340293 master-0 kubenswrapper[3962]: I0308 21:57:18.338561 3962 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 21:57:18.340293 master-0 kubenswrapper[3962]: I0308 21:57:18.338939 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-64488f9d78-krpfs"] Mar 08 21:57:18.340293 master-0 kubenswrapper[3962]: I0308 21:57:18.339296 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 08 21:57:18.340293 master-0 kubenswrapper[3962]: I0308 21:57:18.339757 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 08 21:57:18.340293 master-0 kubenswrapper[3962]: I0308 21:57:18.339874 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-589895fbb7-wtvp5"] Mar 08 21:57:18.343646 master-0 kubenswrapper[3962]: I0308 21:57:18.343364 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 08 21:57:18.343975 master-0 kubenswrapper[3962]: I0308 21:57:18.343942 3962 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 08 21:57:18.344126 master-0 kubenswrapper[3962]: I0308 21:57:18.344101 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 08 21:57:18.344126 master-0 kubenswrapper[3962]: I0308 21:57:18.344068 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 08 21:57:18.345354 master-0 kubenswrapper[3962]: I0308 21:57:18.345169 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 08 21:57:18.345354 master-0 kubenswrapper[3962]: I0308 21:57:18.345295 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x"] Mar 08 21:57:18.346688 master-0 kubenswrapper[3962]: I0308 21:57:18.346029 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2"] Mar 08 21:57:18.346688 master-0 kubenswrapper[3962]: I0308 21:57:18.346672 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-nl9qg"] Mar 08 21:57:18.349707 master-0 kubenswrapper[3962]: I0308 21:57:18.349372 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-677db989d6-cjdgr"] Mar 08 21:57:18.352351 master-0 kubenswrapper[3962]: I0308 21:57:18.352242 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5"] Mar 08 21:57:18.352351 master-0 kubenswrapper[3962]: I0308 21:57:18.352311 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx"] Mar 08 21:57:18.353208 master-0 kubenswrapper[3962]: I0308 21:57:18.352815 3962 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-pwn9k"] Mar 08 21:57:18.353475 master-0 kubenswrapper[3962]: I0308 21:57:18.353392 3962 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-pwn9k" Mar 08 21:57:18.353475 master-0 kubenswrapper[3962]: I0308 21:57:18.353404 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-ddw98"] Mar 08 21:57:18.360578 master-0 kubenswrapper[3962]: I0308 21:57:18.360326 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 08 21:57:18.367547 master-0 kubenswrapper[3962]: I0308 21:57:18.360756 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 08 21:57:18.367547 master-0 kubenswrapper[3962]: I0308 21:57:18.361981 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484"] Mar 08 21:57:18.367547 master-0 kubenswrapper[3962]: I0308 21:57:18.363511 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c"] Mar 08 21:57:18.367547 master-0 kubenswrapper[3962]: I0308 21:57:18.363556 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k"] Mar 08 21:57:18.367547 master-0 kubenswrapper[3962]: I0308 21:57:18.363626 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 08 21:57:18.377659 master-0 kubenswrapper[3962]: I0308 21:57:18.370834 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr"] Mar 08 21:57:18.377659 master-0 kubenswrapper[3962]: I0308 21:57:18.371848 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg"] Mar 08 21:57:18.377659 master-0 kubenswrapper[3962]: I0308 21:57:18.374777 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr"] Mar 08 21:57:18.377659 master-0 kubenswrapper[3962]: I0308 21:57:18.376738 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2"] Mar 08 21:57:18.378228 master-0 kubenswrapper[3962]: I0308 21:57:18.377970 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw"] Mar 08 21:57:18.379492 master-0 kubenswrapper[3962]: I0308 21:57:18.379462 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh"] Mar 08 21:57:18.382302 master-0 kubenswrapper[3962]: I0308 21:57:18.382278 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf"] Mar 08 21:57:18.383438 master-0 kubenswrapper[3962]: I0308 21:57:18.383425 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25"] Mar 08 21:57:18.399394 master-0 kubenswrapper[3962]: I0308 21:57:18.399260 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzlpq\" (UniqueName: \"kubernetes.io/projected/4ef806a4-5486-43a9-8bfa-b1670c888dc1-kube-api-access-qzlpq\") pod 
\"cluster-monitoring-operator-674cbfbd9d-mt484\" (UID: \"4ef806a4-5486-43a9-8bfa-b1670c888dc1\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" Mar 08 21:57:18.399394 master-0 kubenswrapper[3962]: I0308 21:57:18.399318 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:18.399394 master-0 kubenswrapper[3962]: I0308 21:57:18.399352 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngf2z\" (UniqueName: \"kubernetes.io/projected/d4d01185-e485-4697-92c2-31a044f25d82-kube-api-access-ngf2z\") pod \"openshift-controller-manager-operator-8565d84698-x8jg8\" (UID: \"d4d01185-e485-4697-92c2-31a044f25d82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" Mar 08 21:57:18.399394 master-0 kubenswrapper[3962]: I0308 21:57:18.399394 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtbpk\" (UniqueName: \"kubernetes.io/projected/1dfc8afd-2330-46a4-ae5b-36522102b332-kube-api-access-jtbpk\") pod \"multus-admission-controller-8d675b596-ddw98\" (UID: \"1dfc8afd-2330-46a4-ae5b-36522102b332\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddw98" Mar 08 21:57:18.399394 master-0 kubenswrapper[3962]: I0308 21:57:18.399420 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z7fx\" (UniqueName: \"kubernetes.io/projected/971ffa86-4d52-4dc3-ba28-03d116ec3494-kube-api-access-7z7fx\") pod \"kube-storage-version-migrator-operator-7f65c457f5-zk8sw\" (UID: \"971ffa86-4d52-4dc3-ba28-03d116ec3494\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw" Mar 08 21:57:18.399394 master-0 kubenswrapper[3962]: I0308 21:57:18.399440 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a913c639-ebfc-42a3-85cd-8a460027d3ec-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:18.399394 master-0 kubenswrapper[3962]: I0308 21:57:18.399464 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv57k\" (UniqueName: \"kubernetes.io/projected/be431b74-1116-4b0f-8b25-bbb0408411b0-kube-api-access-tv57k\") pod \"package-server-manager-854648ff6d-x5zxr\" (UID: \"be431b74-1116-4b0f-8b25-bbb0408411b0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 21:57:18.399394 master-0 kubenswrapper[3962]: I0308 21:57:18.399548 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hstt\" (UniqueName: \"kubernetes.io/projected/4382d186-34e4-40af-9b92-bb17ddcaa23f-kube-api-access-2hstt\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " 
pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:18.399394 master-0 kubenswrapper[3962]: I0308 21:57:18.399702 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04fb7bdb-fb5a-4187-94a3-67c8f09684ed-config\") pod \"kube-apiserver-operator-68bd585b-mww2c\" (UID: \"04fb7bdb-fb5a-4187-94a3-67c8f09684ed\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" Mar 08 21:57:18.399394 master-0 kubenswrapper[3962]: I0308 21:57:18.399727 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 21:57:18.399394 master-0 kubenswrapper[3962]: I0308 21:57:18.399773 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6fbc12f-3c27-4a7a-933f-43a55c960335-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-2mspg\" (UID: \"f6fbc12f-3c27-4a7a-933f-43a55c960335\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg" Mar 08 21:57:18.399394 master-0 kubenswrapper[3962]: I0308 21:57:18.399795 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d01185-e485-4697-92c2-31a044f25d82-config\") pod \"openshift-controller-manager-operator-8565d84698-x8jg8\" (UID: \"d4d01185-e485-4697-92c2-31a044f25d82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" Mar 08 21:57:18.399394 master-0 kubenswrapper[3962]: I0308 21:57:18.399812 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b849f992-1020-4633-98be-75705b962fa9-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-8pqc2\" (UID: \"b849f992-1020-4633-98be-75705b962fa9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2" Mar 08 21:57:18.399394 master-0 kubenswrapper[3962]: I0308 21:57:18.399831 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4382d186-34e4-40af-9b92-bb17ddcaa23f-config\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:18.402373 master-0 kubenswrapper[3962]: I0308 21:57:18.399847 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4382d186-34e4-40af-9b92-bb17ddcaa23f-etcd-ca\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:18.402373 master-0 kubenswrapper[3962]: I0308 21:57:18.399866 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b849f992-1020-4633-98be-75705b962fa9-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-8pqc2\" (UID: \"b849f992-1020-4633-98be-75705b962fa9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2" Mar 08 21:57:18.402373 master-0 kubenswrapper[3962]: I0308 21:57:18.399882 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:18.402373 master-0 kubenswrapper[3962]: I0308 21:57:18.399904 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pwq4\" (UniqueName: \"kubernetes.io/projected/83b5f0b6-adee-4820-8212-b4d182b178d2-kube-api-access-5pwq4\") pod \"catalog-operator-7d9c49f57b-6q5t2\" (UID: \"83b5f0b6-adee-4820-8212-b4d182b178d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" Mar 08 21:57:18.402373 master-0 kubenswrapper[3962]: I0308 21:57:18.399920 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/4ef806a4-5486-43a9-8bfa-b1670c888dc1-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-mt484\" (UID: \"4ef806a4-5486-43a9-8bfa-b1670c888dc1\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" Mar 08 21:57:18.402373 master-0 kubenswrapper[3962]: I0308 21:57:18.399937 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8e00c74-fb72-4e3d-a22c-c38a4772a813-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-nqz5k\" (UID: \"a8e00c74-fb72-4e3d-a22c-c38a4772a813\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k" Mar 08 21:57:18.402373 master-0 kubenswrapper[3962]: I0308 21:57:18.399953 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs\") pod \"multus-admission-controller-8d675b596-ddw98\" (UID: \"1dfc8afd-2330-46a4-ae5b-36522102b332\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddw98" Mar 08 21:57:18.402373 master-0 kubenswrapper[3962]: I0308 21:57:18.399970 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4382d186-34e4-40af-9b92-bb17ddcaa23f-etcd-client\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:18.402373 master-0 kubenswrapper[3962]: I0308 21:57:18.400001 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d0641333-feda-44c5-baf5-ceee4ce3fd8f-available-featuregates\") pod \"openshift-config-operator-64488f9d78-krpfs\" (UID: \"d0641333-feda-44c5-baf5-ceee4ce3fd8f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" Mar 08 
21:57:18.402373 master-0 kubenswrapper[3962]: I0308 21:57:18.400036 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert\") pod \"catalog-operator-7d9c49f57b-6q5t2\" (UID: \"83b5f0b6-adee-4820-8212-b4d182b178d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" Mar 08 21:57:18.402373 master-0 kubenswrapper[3962]: I0308 21:57:18.400066 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-mt484\" (UID: \"4ef806a4-5486-43a9-8bfa-b1670c888dc1\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" Mar 08 21:57:18.402373 master-0 kubenswrapper[3962]: I0308 21:57:18.400122 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6ht7\" (UniqueName: \"kubernetes.io/projected/37bf82cb-adea-46d3-a899-136eb1d1f292-kube-api-access-v6ht7\") pod \"csi-snapshot-controller-operator-5685fbc7d-nl9qg\" (UID: \"37bf82cb-adea-46d3-a899-136eb1d1f292\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-nl9qg" Mar 08 21:57:18.402373 master-0 kubenswrapper[3962]: I0308 21:57:18.400157 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-trusted-ca\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 21:57:18.402373 master-0 kubenswrapper[3962]: I0308 21:57:18.400178 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d851f97-b21e-432e-a4c3-dc0a8ff00e84-config\") pod \"service-ca-operator-69b6fc6b88-pkcp5\" (UID: \"0d851f97-b21e-432e-a4c3-dc0a8ff00e84\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5" Mar 08 21:57:18.403228 master-0 kubenswrapper[3962]: I0308 21:57:18.400205 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff6pm\" (UniqueName: \"kubernetes.io/projected/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-kube-api-access-ff6pm\") pod \"olm-operator-d64cfc9db-xqh7x\" (UID: \"3c50dd1f-fcbc-412c-a1cc-0738ea4464e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" Mar 08 21:57:18.403228 master-0 kubenswrapper[3962]: I0308 21:57:18.400232 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/971ffa86-4d52-4dc3-ba28-03d116ec3494-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-zk8sw\" (UID: \"971ffa86-4d52-4dc3-ba28-03d116ec3494\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw" Mar 08 21:57:18.403228 master-0 kubenswrapper[3962]: I0308 21:57:18.400275 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwqqw\" (UniqueName: 
\"kubernetes.io/projected/a8e00c74-fb72-4e3d-a22c-c38a4772a813-kube-api-access-gwqqw\") pod \"openshift-apiserver-operator-799b6db4d7-nqz5k\" (UID: \"a8e00c74-fb72-4e3d-a22c-c38a4772a813\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k" Mar 08 21:57:18.403228 master-0 kubenswrapper[3962]: I0308 21:57:18.400294 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 21:57:18.403228 master-0 kubenswrapper[3962]: I0308 21:57:18.400409 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 21:57:18.403228 master-0 kubenswrapper[3962]: I0308 21:57:18.400462 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dr4p\" (UniqueName: \"kubernetes.io/projected/df48e7e0-0659-48e2-9b6a-32c964ff47b2-kube-api-access-4dr4p\") pod \"dns-operator-589895fbb7-wtvp5\" (UID: \"df48e7e0-0659-48e2-9b6a-32c964ff47b2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" Mar 08 21:57:18.403228 master-0 kubenswrapper[3962]: I0308 21:57:18.400496 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d01185-e485-4697-92c2-31a044f25d82-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-x8jg8\" (UID: \"d4d01185-e485-4697-92c2-31a044f25d82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" Mar 08 21:57:18.403228 master-0 kubenswrapper[3962]: I0308 21:57:18.400603 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8e00c74-fb72-4e3d-a22c-c38a4772a813-config\") pod \"openshift-apiserver-operator-799b6db4d7-nqz5k\" (UID: \"a8e00c74-fb72-4e3d-a22c-c38a4772a813\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k" Mar 08 21:57:18.403228 master-0 kubenswrapper[3962]: I0308 21:57:18.400637 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l47w\" (UniqueName: \"kubernetes.io/projected/2851c096-f5cb-4a46-a5a0-ac0b1341033b-kube-api-access-2l47w\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:18.403228 master-0 kubenswrapper[3962]: I0308 21:57:18.400665 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwdhp\" (UniqueName: \"kubernetes.io/projected/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-kube-api-access-vwdhp\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 
21:57:18.403228 master-0 kubenswrapper[3962]: I0308 21:57:18.400690 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04fb7bdb-fb5a-4187-94a3-67c8f09684ed-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-mww2c\" (UID: \"04fb7bdb-fb5a-4187-94a3-67c8f09684ed\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" Mar 08 21:57:18.403228 master-0 kubenswrapper[3962]: I0308 21:57:18.400713 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-config\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 21:57:18.403228 master-0 kubenswrapper[3962]: I0308 21:57:18.400736 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-784c7\" (UniqueName: \"kubernetes.io/projected/d0641333-feda-44c5-baf5-ceee4ce3fd8f-kube-api-access-784c7\") pod \"openshift-config-operator-64488f9d78-krpfs\" (UID: \"d0641333-feda-44c5-baf5-ceee4ce3fd8f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" Mar 08 21:57:18.403228 master-0 kubenswrapper[3962]: I0308 21:57:18.400835 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d851f97-b21e-432e-a4c3-dc0a8ff00e84-serving-cert\") pod \"service-ca-operator-69b6fc6b88-pkcp5\" (UID: \"0d851f97-b21e-432e-a4c3-dc0a8ff00e84\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5" Mar 08 21:57:18.404055 master-0 kubenswrapper[3962]: I0308 21:57:18.400904 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-x5zxr\" (UID: \"be431b74-1116-4b0f-8b25-bbb0408411b0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 21:57:18.404055 master-0 kubenswrapper[3962]: I0308 21:57:18.400929 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/971ffa86-4d52-4dc3-ba28-03d116ec3494-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-zk8sw\" (UID: \"971ffa86-4d52-4dc3-ba28-03d116ec3494\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw" Mar 08 21:57:18.404055 master-0 kubenswrapper[3962]: I0308 21:57:18.400972 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tlmx\" (UniqueName: \"kubernetes.io/projected/0d851f97-b21e-432e-a4c3-dc0a8ff00e84-kube-api-access-7tlmx\") pod \"service-ca-operator-69b6fc6b88-pkcp5\" (UID: \"0d851f97-b21e-432e-a4c3-dc0a8ff00e84\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5" Mar 08 21:57:18.404055 master-0 kubenswrapper[3962]: I0308 21:57:18.400994 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4382d186-34e4-40af-9b92-bb17ddcaa23f-serving-cert\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:18.404055 master-0 kubenswrapper[3962]: I0308 21:57:18.401011 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:18.404055 master-0 kubenswrapper[3962]: I0308 21:57:18.401062 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04fb7bdb-fb5a-4187-94a3-67c8f09684ed-serving-cert\") pod \"kube-apiserver-operator-68bd585b-mww2c\" (UID: \"04fb7bdb-fb5a-4187-94a3-67c8f09684ed\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" Mar 08 21:57:18.404055 master-0 kubenswrapper[3962]: I0308 21:57:18.401131 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert\") pod \"olm-operator-d64cfc9db-xqh7x\" (UID: \"3c50dd1f-fcbc-412c-a1cc-0738ea4464e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" Mar 08 21:57:18.404055 master-0 kubenswrapper[3962]: I0308 21:57:18.401153 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drcp8\" (UniqueName: \"kubernetes.io/projected/a913c639-ebfc-42a3-85cd-8a460027d3ec-kube-api-access-drcp8\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:18.404055 master-0 kubenswrapper[3962]: I0308 21:57:18.401190 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b849f992-1020-4633-98be-75705b962fa9-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-8pqc2\" (UID: \"b849f992-1020-4633-98be-75705b962fa9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2" Mar 08 21:57:18.404055 master-0 kubenswrapper[3962]: I0308 21:57:18.401208 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-bound-sa-token\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 21:57:18.404055 master-0 kubenswrapper[3962]: I0308 21:57:18.401222 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2851c096-f5cb-4a46-a5a0-ac0b1341033b-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:18.404055 master-0 
kubenswrapper[3962]: I0308 21:57:18.401271 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a913c639-ebfc-42a3-85cd-8a460027d3ec-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:18.404055 master-0 kubenswrapper[3962]: I0308 21:57:18.401297 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6fbc12f-3c27-4a7a-933f-43a55c960335-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-2mspg\" (UID: \"f6fbc12f-3c27-4a7a-933f-43a55c960335\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg" Mar 08 21:57:18.404055 master-0 kubenswrapper[3962]: I0308 21:57:18.401340 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4382d186-34e4-40af-9b92-bb17ddcaa23f-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:18.404683 master-0 kubenswrapper[3962]: I0308 21:57:18.401356 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-serving-cert\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 21:57:18.404683 master-0 kubenswrapper[3962]: I0308 21:57:18.401372 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6fbc12f-3c27-4a7a-933f-43a55c960335-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-2mspg\" (UID: \"f6fbc12f-3c27-4a7a-933f-43a55c960335\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg" Mar 08 21:57:18.404683 master-0 kubenswrapper[3962]: I0308 21:57:18.401386 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0641333-feda-44c5-baf5-ceee4ce3fd8f-serving-cert\") pod \"openshift-config-operator-64488f9d78-krpfs\" (UID: \"d0641333-feda-44c5-baf5-ceee4ce3fd8f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" Mar 08 21:57:18.404683 master-0 kubenswrapper[3962]: I0308 21:57:18.401419 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96gl4\" (UniqueName: \"kubernetes.io/projected/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-kube-api-access-96gl4\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 21:57:18.404683 master-0 kubenswrapper[3962]: I0308 21:57:18.401435 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls\") pod \"dns-operator-589895fbb7-wtvp5\" 
(UID: \"df48e7e0-0659-48e2-9b6a-32c964ff47b2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" Mar 08 21:57:18.501925 master-0 kubenswrapper[3962]: I0308 21:57:18.501892 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls\") pod \"dns-operator-589895fbb7-wtvp5\" (UID: \"df48e7e0-0659-48e2-9b6a-32c964ff47b2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" Mar 08 21:57:18.502001 master-0 kubenswrapper[3962]: I0308 21:57:18.501929 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96gl4\" (UniqueName: \"kubernetes.io/projected/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-kube-api-access-96gl4\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 21:57:18.502001 master-0 kubenswrapper[3962]: I0308 21:57:18.501952 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzlpq\" (UniqueName: \"kubernetes.io/projected/4ef806a4-5486-43a9-8bfa-b1670c888dc1-kube-api-access-qzlpq\") pod \"cluster-monitoring-operator-674cbfbd9d-mt484\" (UID: \"4ef806a4-5486-43a9-8bfa-b1670c888dc1\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" Mar 08 21:57:18.502001 master-0 kubenswrapper[3962]: I0308 21:57:18.501969 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:18.502334 master-0 kubenswrapper[3962]: E0308 21:57:18.502318 3962 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 08 21:57:18.502450 master-0 kubenswrapper[3962]: I0308 21:57:18.502373 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngf2z\" (UniqueName: \"kubernetes.io/projected/d4d01185-e485-4697-92c2-31a044f25d82-kube-api-access-ngf2z\") pod \"openshift-controller-manager-operator-8565d84698-x8jg8\" (UID: \"d4d01185-e485-4697-92c2-31a044f25d82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" Mar 08 21:57:18.502504 master-0 kubenswrapper[3962]: E0308 21:57:18.502478 3962 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 08 21:57:18.502553 master-0 kubenswrapper[3962]: E0308 21:57:18.502541 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls podName:a913c639-ebfc-42a3-85cd-8a460027d3ec nodeName:}" failed. No retries permitted until 2026-03-08 21:57:19.002402342 +0000 UTC m=+106.635674544 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-g2ddr" (UID: "a913c639-ebfc-42a3-85cd-8a460027d3ec") : secret "image-registry-operator-tls" not found Mar 08 21:57:18.504116 master-0 kubenswrapper[3962]: I0308 21:57:18.504097 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z7fx\" (UniqueName: \"kubernetes.io/projected/971ffa86-4d52-4dc3-ba28-03d116ec3494-kube-api-access-7z7fx\") pod \"kube-storage-version-migrator-operator-7f65c457f5-zk8sw\" (UID: \"971ffa86-4d52-4dc3-ba28-03d116ec3494\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw" Mar 08 21:57:18.504211 master-0 kubenswrapper[3962]: E0308 21:57:18.504117 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls podName:df48e7e0-0659-48e2-9b6a-32c964ff47b2 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:19.004091016 +0000 UTC m=+106.637363218 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls") pod "dns-operator-589895fbb7-wtvp5" (UID: "df48e7e0-0659-48e2-9b6a-32c964ff47b2") : secret "metrics-tls" not found Mar 08 21:57:18.504278 master-0 kubenswrapper[3962]: I0308 21:57:18.504265 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtbpk\" (UniqueName: \"kubernetes.io/projected/1dfc8afd-2330-46a4-ae5b-36522102b332-kube-api-access-jtbpk\") pod \"multus-admission-controller-8d675b596-ddw98\" (UID: \"1dfc8afd-2330-46a4-ae5b-36522102b332\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddw98" Mar 08 21:57:18.504343 master-0 kubenswrapper[3962]: I0308 21:57:18.504331 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a913c639-ebfc-42a3-85cd-8a460027d3ec-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:18.504414 master-0 kubenswrapper[3962]: I0308 21:57:18.504399 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv57k\" (UniqueName: \"kubernetes.io/projected/be431b74-1116-4b0f-8b25-bbb0408411b0-kube-api-access-tv57k\") pod \"package-server-manager-854648ff6d-x5zxr\" (UID: \"be431b74-1116-4b0f-8b25-bbb0408411b0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 21:57:18.504580 master-0 kubenswrapper[3962]: I0308 21:57:18.504566 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hstt\" (UniqueName: \"kubernetes.io/projected/4382d186-34e4-40af-9b92-bb17ddcaa23f-kube-api-access-2hstt\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:18.504659 master-0 kubenswrapper[3962]: I0308 21:57:18.504646 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04fb7bdb-fb5a-4187-94a3-67c8f09684ed-config\") pod 
\"kube-apiserver-operator-68bd585b-mww2c\" (UID: \"04fb7bdb-fb5a-4187-94a3-67c8f09684ed\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" Mar 08 21:57:18.504834 master-0 kubenswrapper[3962]: I0308 21:57:18.504800 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 21:57:18.504884 master-0 kubenswrapper[3962]: I0308 21:57:18.504850 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-5ljhh\" (UID: \"7e0267ba-5dd7-4e81-885f-95b27a7b42ea\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 21:57:18.504944 master-0 kubenswrapper[3962]: I0308 21:57:18.504907 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6fbc12f-3c27-4a7a-933f-43a55c960335-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-2mspg\" (UID: \"f6fbc12f-3c27-4a7a-933f-43a55c960335\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg" Mar 08 21:57:18.505878 master-0 kubenswrapper[3962]: I0308 21:57:18.505565 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a913c639-ebfc-42a3-85cd-8a460027d3ec-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:18.505878 master-0 kubenswrapper[3962]: I0308 21:57:18.505587 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04fb7bdb-fb5a-4187-94a3-67c8f09684ed-config\") pod \"kube-apiserver-operator-68bd585b-mww2c\" (UID: \"04fb7bdb-fb5a-4187-94a3-67c8f09684ed\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" Mar 08 21:57:18.505878 master-0 kubenswrapper[3962]: I0308 21:57:18.505692 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d01185-e485-4697-92c2-31a044f25d82-config\") pod \"openshift-controller-manager-operator-8565d84698-x8jg8\" (UID: \"d4d01185-e485-4697-92c2-31a044f25d82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" Mar 08 21:57:18.505878 master-0 kubenswrapper[3962]: I0308 21:57:18.505832 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-5ljhh\" (UID: \"7e0267ba-5dd7-4e81-885f-95b27a7b42ea\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 21:57:18.505878 master-0 kubenswrapper[3962]: I0308 21:57:18.505865 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b849f992-1020-4633-98be-75705b962fa9-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-8pqc2\" (UID: \"b849f992-1020-4633-98be-75705b962fa9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2" Mar 08 21:57:18.506066 master-0 kubenswrapper[3962]: I0308 21:57:18.505892 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4382d186-34e4-40af-9b92-bb17ddcaa23f-config\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:18.506066 master-0 kubenswrapper[3962]: I0308 21:57:18.505919 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4382d186-34e4-40af-9b92-bb17ddcaa23f-etcd-ca\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:18.506066 master-0 kubenswrapper[3962]: I0308 21:57:18.505948 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 21:57:18.506066 master-0 kubenswrapper[3962]: I0308 21:57:18.505950 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b849f992-1020-4633-98be-75705b962fa9-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-8pqc2\" (UID: \"b849f992-1020-4633-98be-75705b962fa9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2" Mar 08 21:57:18.506066 master-0 kubenswrapper[3962]: I0308 21:57:18.506016 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:18.506066 master-0 kubenswrapper[3962]: I0308 21:57:18.506040 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/de89c423-0f2a-440f-9fa9-92fefea84b09-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-mnf25\" (UID: \"de89c423-0f2a-440f-9fa9-92fefea84b09\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" Mar 08 21:57:18.506235 master-0 kubenswrapper[3962]: I0308 21:57:18.506099 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pwq4\" (UniqueName: \"kubernetes.io/projected/83b5f0b6-adee-4820-8212-b4d182b178d2-kube-api-access-5pwq4\") pod \"catalog-operator-7d9c49f57b-6q5t2\" (UID: \"83b5f0b6-adee-4820-8212-b4d182b178d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" Mar 08 21:57:18.506235 master-0 kubenswrapper[3962]: I0308 21:57:18.506146 3962 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/4ef806a4-5486-43a9-8bfa-b1670c888dc1-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-mt484\" (UID: \"4ef806a4-5486-43a9-8bfa-b1670c888dc1\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" Mar 08 21:57:18.506235 master-0 kubenswrapper[3962]: I0308 21:57:18.506185 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8e00c74-fb72-4e3d-a22c-c38a4772a813-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-nqz5k\" (UID: \"a8e00c74-fb72-4e3d-a22c-c38a4772a813\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k" Mar 08 21:57:18.506235 master-0 kubenswrapper[3962]: I0308 21:57:18.506200 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d01185-e485-4697-92c2-31a044f25d82-config\") pod \"openshift-controller-manager-operator-8565d84698-x8jg8\" (UID: \"d4d01185-e485-4697-92c2-31a044f25d82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" Mar 08 21:57:18.506235 master-0 kubenswrapper[3962]: I0308 21:57:18.506202 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs\") pod \"multus-admission-controller-8d675b596-ddw98\" (UID: \"1dfc8afd-2330-46a4-ae5b-36522102b332\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddw98" Mar 08 21:57:18.506235 master-0 kubenswrapper[3962]: I0308 21:57:18.506232 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h4vv\" (UniqueName: \"kubernetes.io/projected/de89c423-0f2a-440f-9fa9-92fefea84b09-kube-api-access-7h4vv\") pod \"cluster-olm-operator-77899cf6d-mnf25\" (UID: \"de89c423-0f2a-440f-9fa9-92fefea84b09\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" Mar 08 21:57:18.506378 master-0 kubenswrapper[3962]: I0308 21:57:18.506265 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4382d186-34e4-40af-9b92-bb17ddcaa23f-etcd-client\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:18.506378 master-0 kubenswrapper[3962]: E0308 21:57:18.506283 3962 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 08 21:57:18.506378 master-0 kubenswrapper[3962]: E0308 21:57:18.506326 3962 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 08 21:57:18.506378 master-0 kubenswrapper[3962]: E0308 21:57:18.506351 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs podName:1dfc8afd-2330-46a4-ae5b-36522102b332 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:19.006336415 +0000 UTC m=+106.639608617 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs") pod "multus-admission-controller-8d675b596-ddw98" (UID: "1dfc8afd-2330-46a4-ae5b-36522102b332") : secret "multus-admission-controller-secret" not found Mar 08 21:57:18.506378 master-0 kubenswrapper[3962]: E0308 21:57:18.506365 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls podName:4ef806a4-5486-43a9-8bfa-b1670c888dc1 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:19.006359595 +0000 UTC m=+106.639631797 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-mt484" (UID: "4ef806a4-5486-43a9-8bfa-b1670c888dc1") : secret "cluster-monitoring-operator-tls" not found Mar 08 21:57:18.506378 master-0 kubenswrapper[3962]: I0308 21:57:18.506285 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-mt484\" (UID: \"4ef806a4-5486-43a9-8bfa-b1670c888dc1\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" Mar 08 21:57:18.506532 master-0 kubenswrapper[3962]: I0308 21:57:18.506389 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d0641333-feda-44c5-baf5-ceee4ce3fd8f-available-featuregates\") pod \"openshift-config-operator-64488f9d78-krpfs\" (UID: \"d0641333-feda-44c5-baf5-ceee4ce3fd8f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" Mar 08 21:57:18.506532 master-0 kubenswrapper[3962]: I0308 21:57:18.506431 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert\") pod \"catalog-operator-7d9c49f57b-6q5t2\" (UID: \"83b5f0b6-adee-4820-8212-b4d182b178d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" Mar 08 21:57:18.506532 master-0 kubenswrapper[3962]: I0308 21:57:18.506452 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmdmr\" (UniqueName: \"kubernetes.io/projected/b358dcb7-d01f-4206-b636-b55a599a73bd-kube-api-access-bmdmr\") pod \"iptables-alerter-pwn9k\" (UID: \"b358dcb7-d01f-4206-b636-b55a599a73bd\") " pod="openshift-network-operator/iptables-alerter-pwn9k" Mar 08 21:57:18.506532 master-0 kubenswrapper[3962]: I0308 21:57:18.506471 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjt52\" (UniqueName: \"kubernetes.io/projected/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-kube-api-access-jjt52\") pod \"marketplace-operator-64bf9778cb-5ljhh\" (UID: \"7e0267ba-5dd7-4e81-885f-95b27a7b42ea\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 21:57:18.506532 master-0 kubenswrapper[3962]: I0308 21:57:18.506511 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/971ffa86-4d52-4dc3-ba28-03d116ec3494-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-zk8sw\" (UID: \"971ffa86-4d52-4dc3-ba28-03d116ec3494\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw" Mar 08 21:57:18.506532 master-0 kubenswrapper[3962]: I0308 21:57:18.506530 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6ht7\" (UniqueName: \"kubernetes.io/projected/37bf82cb-adea-46d3-a899-136eb1d1f292-kube-api-access-v6ht7\") pod \"csi-snapshot-controller-operator-5685fbc7d-nl9qg\" (UID: \"37bf82cb-adea-46d3-a899-136eb1d1f292\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-nl9qg" Mar 08 21:57:18.515772 master-0 kubenswrapper[3962]: I0308 21:57:18.506547 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-trusted-ca\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 21:57:18.515772 master-0 kubenswrapper[3962]: I0308 21:57:18.506590 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d851f97-b21e-432e-a4c3-dc0a8ff00e84-config\") pod \"service-ca-operator-69b6fc6b88-pkcp5\" (UID: \"0d851f97-b21e-432e-a4c3-dc0a8ff00e84\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5" Mar 08 21:57:18.515772 master-0 kubenswrapper[3962]: I0308 21:57:18.506607 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff6pm\" (UniqueName: \"kubernetes.io/projected/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-kube-api-access-ff6pm\") pod \"olm-operator-d64cfc9db-xqh7x\" (UID: \"3c50dd1f-fcbc-412c-a1cc-0738ea4464e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" Mar 08 21:57:18.515772 master-0 kubenswrapper[3962]: I0308 21:57:18.506627 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwqqw\" (UniqueName: \"kubernetes.io/projected/a8e00c74-fb72-4e3d-a22c-c38a4772a813-kube-api-access-gwqqw\") pod \"openshift-apiserver-operator-799b6db4d7-nqz5k\" (UID: \"a8e00c74-fb72-4e3d-a22c-c38a4772a813\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k" Mar 08 21:57:18.515772 master-0 kubenswrapper[3962]: I0308 21:57:18.506663 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 21:57:18.515772 master-0 kubenswrapper[3962]: I0308 21:57:18.506664 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b849f992-1020-4633-98be-75705b962fa9-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-8pqc2\" (UID: \"b849f992-1020-4633-98be-75705b962fa9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2" Mar 08 21:57:18.515772 master-0 kubenswrapper[3962]: I0308 21:57:18.506700 3962 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b358dcb7-d01f-4206-b636-b55a599a73bd-iptables-alerter-script\") pod \"iptables-alerter-pwn9k\" (UID: \"b358dcb7-d01f-4206-b636-b55a599a73bd\") " pod="openshift-network-operator/iptables-alerter-pwn9k" Mar 08 21:57:18.515772 master-0 kubenswrapper[3962]: I0308 21:57:18.506740 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 21:57:18.515772 master-0 kubenswrapper[3962]: I0308 21:57:18.506764 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dr4p\" (UniqueName: \"kubernetes.io/projected/df48e7e0-0659-48e2-9b6a-32c964ff47b2-kube-api-access-4dr4p\") pod \"dns-operator-589895fbb7-wtvp5\" (UID: \"df48e7e0-0659-48e2-9b6a-32c964ff47b2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" Mar 08 21:57:18.515772 master-0 kubenswrapper[3962]: I0308 21:57:18.506787 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d01185-e485-4697-92c2-31a044f25d82-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-x8jg8\" (UID: \"d4d01185-e485-4697-92c2-31a044f25d82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" Mar 08 21:57:18.515772 master-0 kubenswrapper[3962]: I0308 21:57:18.506813 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8e00c74-fb72-4e3d-a22c-c38a4772a813-config\") pod \"openshift-apiserver-operator-799b6db4d7-nqz5k\" (UID: \"a8e00c74-fb72-4e3d-a22c-c38a4772a813\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k" Mar 08 21:57:18.515772 master-0 kubenswrapper[3962]: I0308 21:57:18.506834 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l47w\" (UniqueName: \"kubernetes.io/projected/2851c096-f5cb-4a46-a5a0-ac0b1341033b-kube-api-access-2l47w\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:18.515772 master-0 kubenswrapper[3962]: I0308 21:57:18.506858 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwdhp\" (UniqueName: \"kubernetes.io/projected/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-kube-api-access-vwdhp\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 21:57:18.515772 master-0 kubenswrapper[3962]: I0308 21:57:18.506881 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04fb7bdb-fb5a-4187-94a3-67c8f09684ed-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-mww2c\" (UID: \"04fb7bdb-fb5a-4187-94a3-67c8f09684ed\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" Mar 08 21:57:18.515772 master-0 kubenswrapper[3962]: I0308 21:57:18.506905 3962 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-config\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 21:57:18.516185 master-0 kubenswrapper[3962]: I0308 21:57:18.506936 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-784c7\" (UniqueName: \"kubernetes.io/projected/d0641333-feda-44c5-baf5-ceee4ce3fd8f-kube-api-access-784c7\") pod \"openshift-config-operator-64488f9d78-krpfs\" (UID: \"d0641333-feda-44c5-baf5-ceee4ce3fd8f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" Mar 08 21:57:18.516185 master-0 kubenswrapper[3962]: I0308 21:57:18.506969 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d851f97-b21e-432e-a4c3-dc0a8ff00e84-serving-cert\") pod \"service-ca-operator-69b6fc6b88-pkcp5\" (UID: \"0d851f97-b21e-432e-a4c3-dc0a8ff00e84\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5" Mar 08 21:57:18.516185 master-0 kubenswrapper[3962]: I0308 21:57:18.506995 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-x5zxr\" (UID: \"be431b74-1116-4b0f-8b25-bbb0408411b0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 21:57:18.516185 master-0 kubenswrapper[3962]: I0308 21:57:18.507019 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/971ffa86-4d52-4dc3-ba28-03d116ec3494-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-zk8sw\" (UID: \"971ffa86-4d52-4dc3-ba28-03d116ec3494\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw" Mar 08 21:57:18.516185 master-0 kubenswrapper[3962]: I0308 21:57:18.507043 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:18.516185 master-0 kubenswrapper[3962]: I0308 21:57:18.507088 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d0641333-feda-44c5-baf5-ceee4ce3fd8f-available-featuregates\") pod \"openshift-config-operator-64488f9d78-krpfs\" (UID: \"d0641333-feda-44c5-baf5-ceee4ce3fd8f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" Mar 08 21:57:18.516185 master-0 kubenswrapper[3962]: I0308 21:57:18.507093 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tlmx\" (UniqueName: \"kubernetes.io/projected/0d851f97-b21e-432e-a4c3-dc0a8ff00e84-kube-api-access-7tlmx\") pod \"service-ca-operator-69b6fc6b88-pkcp5\" (UID: \"0d851f97-b21e-432e-a4c3-dc0a8ff00e84\") " 
pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5" Mar 08 21:57:18.516185 master-0 kubenswrapper[3962]: I0308 21:57:18.507150 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4382d186-34e4-40af-9b92-bb17ddcaa23f-serving-cert\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:18.516185 master-0 kubenswrapper[3962]: I0308 21:57:18.507177 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04fb7bdb-fb5a-4187-94a3-67c8f09684ed-serving-cert\") pod \"kube-apiserver-operator-68bd585b-mww2c\" (UID: \"04fb7bdb-fb5a-4187-94a3-67c8f09684ed\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" Mar 08 21:57:18.516185 master-0 kubenswrapper[3962]: I0308 21:57:18.507204 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/de89c423-0f2a-440f-9fa9-92fefea84b09-operand-assets\") pod \"cluster-olm-operator-77899cf6d-mnf25\" (UID: \"de89c423-0f2a-440f-9fa9-92fefea84b09\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" Mar 08 21:57:18.516185 master-0 kubenswrapper[3962]: I0308 21:57:18.507227 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert\") pod \"olm-operator-d64cfc9db-xqh7x\" (UID: \"3c50dd1f-fcbc-412c-a1cc-0738ea4464e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" Mar 08 21:57:18.516185 master-0 kubenswrapper[3962]: I0308 21:57:18.507250 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drcp8\" (UniqueName: \"kubernetes.io/projected/a913c639-ebfc-42a3-85cd-8a460027d3ec-kube-api-access-drcp8\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:18.516185 master-0 kubenswrapper[3962]: I0308 21:57:18.507272 3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b358dcb7-d01f-4206-b636-b55a599a73bd-host-slash\") pod \"iptables-alerter-pwn9k\" (UID: \"b358dcb7-d01f-4206-b636-b55a599a73bd\") " pod="openshift-network-operator/iptables-alerter-pwn9k" Mar 08 21:57:18.516185 master-0 kubenswrapper[3962]: E0308 21:57:18.506452 3962 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 08 21:57:18.516185 master-0 kubenswrapper[3962]: I0308 21:57:18.507409 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b849f992-1020-4633-98be-75705b962fa9-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-8pqc2\" (UID: \"b849f992-1020-4633-98be-75705b962fa9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2" Mar 08 21:57:18.516567 master-0 kubenswrapper[3962]: E0308 21:57:18.507432 3962 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls podName:2851c096-f5cb-4a46-a5a0-ac0b1341033b nodeName:}" failed. No retries permitted until 2026-03-08 21:57:19.007405243 +0000 UTC m=+106.640677465 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-c4lpf" (UID: "2851c096-f5cb-4a46-a5a0-ac0b1341033b") : secret "node-tuning-operator-tls" not found Mar 08 21:57:18.516567 master-0 kubenswrapper[3962]: I0308 21:57:18.507469 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-bound-sa-token\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 21:57:18.516567 master-0 kubenswrapper[3962]: I0308 21:57:18.507505 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2851c096-f5cb-4a46-a5a0-ac0b1341033b-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:18.516567 master-0 kubenswrapper[3962]: I0308 21:57:18.507533 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6fbc12f-3c27-4a7a-933f-43a55c960335-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-2mspg\" (UID: \"f6fbc12f-3c27-4a7a-933f-43a55c960335\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg" Mar 08 21:57:18.516567 master-0 kubenswrapper[3962]: I0308 21:57:18.507566 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a913c639-ebfc-42a3-85cd-8a460027d3ec-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:18.516567 master-0 kubenswrapper[3962]: I0308 21:57:18.507595 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4382d186-34e4-40af-9b92-bb17ddcaa23f-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:18.516567 master-0 kubenswrapper[3962]: I0308 21:57:18.507619 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-serving-cert\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 21:57:18.516567 master-0 kubenswrapper[3962]: I0308 21:57:18.507643 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6fbc12f-3c27-4a7a-933f-43a55c960335-config\") pod 
\"openshift-kube-scheduler-operator-5c74bfc494-2mspg\" (UID: \"f6fbc12f-3c27-4a7a-933f-43a55c960335\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg" Mar 08 21:57:18.516567 master-0 kubenswrapper[3962]: I0308 21:57:18.507667 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0641333-feda-44c5-baf5-ceee4ce3fd8f-serving-cert\") pod \"openshift-config-operator-64488f9d78-krpfs\" (UID: \"d0641333-feda-44c5-baf5-ceee4ce3fd8f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" Mar 08 21:57:18.516567 master-0 kubenswrapper[3962]: I0308 21:57:18.508086 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d851f97-b21e-432e-a4c3-dc0a8ff00e84-config\") pod \"service-ca-operator-69b6fc6b88-pkcp5\" (UID: \"0d851f97-b21e-432e-a4c3-dc0a8ff00e84\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5" Mar 08 21:57:18.516567 master-0 kubenswrapper[3962]: E0308 21:57:18.507176 3962 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 08 21:57:18.516567 master-0 kubenswrapper[3962]: I0308 21:57:18.508208 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4382d186-34e4-40af-9b92-bb17ddcaa23f-etcd-ca\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:18.516567 master-0 kubenswrapper[3962]: I0308 21:57:18.508708 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 21:57:18.516567 master-0 kubenswrapper[3962]: I0308 21:57:18.509530 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-trusted-ca\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 21:57:18.516567 master-0 kubenswrapper[3962]: I0308 21:57:18.509631 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2851c096-f5cb-4a46-a5a0-ac0b1341033b-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:18.516923 master-0 kubenswrapper[3962]: I0308 21:57:18.510622 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4382d186-34e4-40af-9b92-bb17ddcaa23f-config\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:18.516923 master-0 kubenswrapper[3962]: I0308 21:57:18.513097 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/d0641333-feda-44c5-baf5-ceee4ce3fd8f-serving-cert\") pod \"openshift-config-operator-64488f9d78-krpfs\" (UID: \"d0641333-feda-44c5-baf5-ceee4ce3fd8f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" Mar 08 21:57:18.516923 master-0 kubenswrapper[3962]: I0308 21:57:18.513172 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/4ef806a4-5486-43a9-8bfa-b1670c888dc1-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-mt484\" (UID: \"4ef806a4-5486-43a9-8bfa-b1670c888dc1\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" Mar 08 21:57:18.516923 master-0 kubenswrapper[3962]: I0308 21:57:18.513389 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-serving-cert\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 21:57:18.516923 master-0 kubenswrapper[3962]: I0308 21:57:18.513447 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8e00c74-fb72-4e3d-a22c-c38a4772a813-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-nqz5k\" (UID: \"a8e00c74-fb72-4e3d-a22c-c38a4772a813\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k" Mar 08 21:57:18.516923 master-0 kubenswrapper[3962]: E0308 21:57:18.513455 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert podName:83b5f0b6-adee-4820-8212-b4d182b178d2 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:19.013442579 +0000 UTC m=+106.646714781 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert") pod "catalog-operator-7d9c49f57b-6q5t2" (UID: "83b5f0b6-adee-4820-8212-b4d182b178d2") : secret "catalog-operator-serving-cert" not found Mar 08 21:57:18.516923 master-0 kubenswrapper[3962]: E0308 21:57:18.513546 3962 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 08 21:57:18.516923 master-0 kubenswrapper[3962]: E0308 21:57:18.513599 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert podName:2851c096-f5cb-4a46-a5a0-ac0b1341033b nodeName:}" failed. No retries permitted until 2026-03-08 21:57:19.013580603 +0000 UTC m=+106.646852805 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-c4lpf" (UID: "2851c096-f5cb-4a46-a5a0-ac0b1341033b") : secret "performance-addon-operator-webhook-cert" not found Mar 08 21:57:18.516923 master-0 kubenswrapper[3962]: E0308 21:57:18.513653 3962 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 08 21:57:18.516923 master-0 kubenswrapper[3962]: E0308 21:57:18.513683 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert podName:be431b74-1116-4b0f-8b25-bbb0408411b0 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:19.013674425 +0000 UTC m=+106.646946747 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-x5zxr" (UID: "be431b74-1116-4b0f-8b25-bbb0408411b0") : secret "package-server-manager-serving-cert" not found Mar 08 21:57:18.516923 master-0 kubenswrapper[3962]: I0308 21:57:18.513759 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b849f992-1020-4633-98be-75705b962fa9-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-8pqc2\" (UID: \"b849f992-1020-4633-98be-75705b962fa9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2" Mar 08 21:57:18.516923 master-0 kubenswrapper[3962]: I0308 21:57:18.513843 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8e00c74-fb72-4e3d-a22c-c38a4772a813-config\") pod \"openshift-apiserver-operator-799b6db4d7-nqz5k\" (UID: \"a8e00c74-fb72-4e3d-a22c-c38a4772a813\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k" Mar 08 21:57:18.516923 master-0 kubenswrapper[3962]: I0308 21:57:18.514355 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/971ffa86-4d52-4dc3-ba28-03d116ec3494-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-zk8sw\" (UID: \"971ffa86-4d52-4dc3-ba28-03d116ec3494\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw" Mar 08 21:57:18.516923 master-0 kubenswrapper[3962]: E0308 21:57:18.514516 3962 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 08 21:57:18.516923 master-0 kubenswrapper[3962]: E0308 21:57:18.514531 3962 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 08 21:57:18.517321 master-0 kubenswrapper[3962]: E0308 21:57:18.514559 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert podName:3c50dd1f-fcbc-412c-a1cc-0738ea4464e0 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:19.014546998 +0000 UTC m=+106.647819210 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert") pod "olm-operator-d64cfc9db-xqh7x" (UID: "3c50dd1f-fcbc-412c-a1cc-0738ea4464e0") : secret "olm-operator-serving-cert" not found Mar 08 21:57:18.517321 master-0 kubenswrapper[3962]: E0308 21:57:18.514588 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls podName:84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed nodeName:}" failed. No retries permitted until 2026-03-08 21:57:19.014570019 +0000 UTC m=+106.647842331 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls") pod "ingress-operator-677db989d6-cjdgr" (UID: "84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed") : secret "metrics-tls" not found Mar 08 21:57:18.517321 master-0 kubenswrapper[3962]: I0308 21:57:18.514646 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4382d186-34e4-40af-9b92-bb17ddcaa23f-etcd-client\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:18.517321 master-0 kubenswrapper[3962]: I0308 21:57:18.514933 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-config\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 21:57:18.517321 master-0 kubenswrapper[3962]: I0308 21:57:18.514978 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d851f97-b21e-432e-a4c3-dc0a8ff00e84-serving-cert\") pod \"service-ca-operator-69b6fc6b88-pkcp5\" (UID: \"0d851f97-b21e-432e-a4c3-dc0a8ff00e84\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5" Mar 08 21:57:18.517321 master-0 kubenswrapper[3962]: I0308 21:57:18.515010 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6fbc12f-3c27-4a7a-933f-43a55c960335-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-2mspg\" (UID: \"f6fbc12f-3c27-4a7a-933f-43a55c960335\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg" Mar 08 21:57:18.517321 master-0 kubenswrapper[3962]: I0308 21:57:18.515329 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4382d186-34e4-40af-9b92-bb17ddcaa23f-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:18.517321 master-0 kubenswrapper[3962]: I0308 21:57:18.515640 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d01185-e485-4697-92c2-31a044f25d82-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-x8jg8\" (UID: \"d4d01185-e485-4697-92c2-31a044f25d82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" Mar 08 21:57:18.517321 master-0 kubenswrapper[3962]: I0308 
21:57:18.515971 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/971ffa86-4d52-4dc3-ba28-03d116ec3494-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-zk8sw\" (UID: \"971ffa86-4d52-4dc3-ba28-03d116ec3494\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw" Mar 08 21:57:18.517899 master-0 kubenswrapper[3962]: I0308 21:57:18.517824 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6fbc12f-3c27-4a7a-933f-43a55c960335-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-2mspg\" (UID: \"f6fbc12f-3c27-4a7a-933f-43a55c960335\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg" Mar 08 21:57:18.517899 master-0 kubenswrapper[3962]: I0308 21:57:18.517826 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04fb7bdb-fb5a-4187-94a3-67c8f09684ed-serving-cert\") pod \"kube-apiserver-operator-68bd585b-mww2c\" (UID: \"04fb7bdb-fb5a-4187-94a3-67c8f09684ed\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" Mar 08 21:57:18.518279 master-0 kubenswrapper[3962]: I0308 21:57:18.518240 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4382d186-34e4-40af-9b92-bb17ddcaa23f-serving-cert\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:18.519297 master-0 kubenswrapper[3962]: I0308 21:57:18.519261 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngf2z\" (UniqueName: \"kubernetes.io/projected/d4d01185-e485-4697-92c2-31a044f25d82-kube-api-access-ngf2z\") pod \"openshift-controller-manager-operator-8565d84698-x8jg8\" (UID: \"d4d01185-e485-4697-92c2-31a044f25d82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" Mar 08 21:57:18.536473 master-0 kubenswrapper[3962]: I0308 21:57:18.535287 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtbpk\" (UniqueName: \"kubernetes.io/projected/1dfc8afd-2330-46a4-ae5b-36522102b332-kube-api-access-jtbpk\") pod \"multus-admission-controller-8d675b596-ddw98\" (UID: \"1dfc8afd-2330-46a4-ae5b-36522102b332\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddw98" Mar 08 21:57:18.536473 master-0 kubenswrapper[3962]: I0308 21:57:18.536473 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z7fx\" (UniqueName: \"kubernetes.io/projected/971ffa86-4d52-4dc3-ba28-03d116ec3494-kube-api-access-7z7fx\") pod \"kube-storage-version-migrator-operator-7f65c457f5-zk8sw\" (UID: \"971ffa86-4d52-4dc3-ba28-03d116ec3494\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw" Mar 08 21:57:18.537581 master-0 kubenswrapper[3962]: I0308 21:57:18.537537 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv57k\" (UniqueName: \"kubernetes.io/projected/be431b74-1116-4b0f-8b25-bbb0408411b0-kube-api-access-tv57k\") pod \"package-server-manager-854648ff6d-x5zxr\" (UID: \"be431b74-1116-4b0f-8b25-bbb0408411b0\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 21:57:18.538280 master-0 kubenswrapper[3962]: I0308 21:57:18.538239 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96gl4\" (UniqueName: \"kubernetes.io/projected/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-kube-api-access-96gl4\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 21:57:18.547071 master-0 kubenswrapper[3962]: I0308 21:57:18.543737 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b849f992-1020-4633-98be-75705b962fa9-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-8pqc2\" (UID: \"b849f992-1020-4633-98be-75705b962fa9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2" Mar 08 21:57:18.547071 master-0 kubenswrapper[3962]: I0308 21:57:18.544013 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6ht7\" (UniqueName: \"kubernetes.io/projected/37bf82cb-adea-46d3-a899-136eb1d1f292-kube-api-access-v6ht7\") pod \"csi-snapshot-controller-operator-5685fbc7d-nl9qg\" (UID: \"37bf82cb-adea-46d3-a899-136eb1d1f292\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-nl9qg" Mar 08 21:57:18.547071 master-0 kubenswrapper[3962]: I0308 21:57:18.544269 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tlmx\" (UniqueName: \"kubernetes.io/projected/0d851f97-b21e-432e-a4c3-dc0a8ff00e84-kube-api-access-7tlmx\") pod \"service-ca-operator-69b6fc6b88-pkcp5\" (UID: \"0d851f97-b21e-432e-a4c3-dc0a8ff00e84\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5" Mar 08 21:57:18.547071 master-0 kubenswrapper[3962]: I0308 21:57:18.544382 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff6pm\" (UniqueName: \"kubernetes.io/projected/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-kube-api-access-ff6pm\") pod \"olm-operator-d64cfc9db-xqh7x\" (UID: \"3c50dd1f-fcbc-412c-a1cc-0738ea4464e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" Mar 08 21:57:18.547071 master-0 kubenswrapper[3962]: I0308 21:57:18.544457 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzlpq\" (UniqueName: \"kubernetes.io/projected/4ef806a4-5486-43a9-8bfa-b1670c888dc1-kube-api-access-qzlpq\") pod \"cluster-monitoring-operator-674cbfbd9d-mt484\" (UID: \"4ef806a4-5486-43a9-8bfa-b1670c888dc1\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" Mar 08 21:57:18.547071 master-0 kubenswrapper[3962]: I0308 21:57:18.544760 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hstt\" (UniqueName: \"kubernetes.io/projected/4382d186-34e4-40af-9b92-bb17ddcaa23f-kube-api-access-2hstt\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:18.547071 master-0 kubenswrapper[3962]: I0308 21:57:18.546598 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6fbc12f-3c27-4a7a-933f-43a55c960335-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-2mspg\" 
(UID: \"f6fbc12f-3c27-4a7a-933f-43a55c960335\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg" Mar 08 21:57:18.556318 master-0 kubenswrapper[3962]: I0308 21:57:18.556271 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwqqw\" (UniqueName: \"kubernetes.io/projected/a8e00c74-fb72-4e3d-a22c-c38a4772a813-kube-api-access-gwqqw\") pod \"openshift-apiserver-operator-799b6db4d7-nqz5k\" (UID: \"a8e00c74-fb72-4e3d-a22c-c38a4772a813\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k" Mar 08 21:57:18.579601 master-0 kubenswrapper[3962]: I0308 21:57:18.579578 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pwq4\" (UniqueName: \"kubernetes.io/projected/83b5f0b6-adee-4820-8212-b4d182b178d2-kube-api-access-5pwq4\") pod \"catalog-operator-7d9c49f57b-6q5t2\" (UID: \"83b5f0b6-adee-4820-8212-b4d182b178d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" Mar 08 21:57:18.580499 master-0 kubenswrapper[3962]: I0308 21:57:18.580384 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg" Mar 08 21:57:18.596038 master-0 kubenswrapper[3962]: I0308 21:57:18.595784 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" Mar 08 21:57:18.601531 master-0 kubenswrapper[3962]: I0308 21:57:18.601445 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a913c639-ebfc-42a3-85cd-8a460027d3ec-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:18.609142 master-0 kubenswrapper[3962]: I0308 21:57:18.609091 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-5ljhh\" (UID: \"7e0267ba-5dd7-4e81-885f-95b27a7b42ea\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 21:57:18.609231 master-0 kubenswrapper[3962]: I0308 21:57:18.609150 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-5ljhh\" (UID: \"7e0267ba-5dd7-4e81-885f-95b27a7b42ea\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 21:57:18.609231 master-0 kubenswrapper[3962]: I0308 21:57:18.609204 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/de89c423-0f2a-440f-9fa9-92fefea84b09-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-mnf25\" (UID: \"de89c423-0f2a-440f-9fa9-92fefea84b09\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" Mar 08 21:57:18.609822 master-0 kubenswrapper[3962]: I0308 21:57:18.609302 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h4vv\" 
(UniqueName: \"kubernetes.io/projected/de89c423-0f2a-440f-9fa9-92fefea84b09-kube-api-access-7h4vv\") pod \"cluster-olm-operator-77899cf6d-mnf25\" (UID: \"de89c423-0f2a-440f-9fa9-92fefea84b09\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" Mar 08 21:57:18.609822 master-0 kubenswrapper[3962]: E0308 21:57:18.609485 3962 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 08 21:57:18.609822 master-0 kubenswrapper[3962]: E0308 21:57:18.609576 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics podName:7e0267ba-5dd7-4e81-885f-95b27a7b42ea nodeName:}" failed. No retries permitted until 2026-03-08 21:57:19.109541507 +0000 UTC m=+106.742813709 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-5ljhh" (UID: "7e0267ba-5dd7-4e81-885f-95b27a7b42ea") : secret "marketplace-operator-metrics" not found Mar 08 21:57:18.609822 master-0 kubenswrapper[3962]: I0308 21:57:18.609807 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmdmr\" (UniqueName: \"kubernetes.io/projected/b358dcb7-d01f-4206-b636-b55a599a73bd-kube-api-access-bmdmr\") pod \"iptables-alerter-pwn9k\" (UID: \"b358dcb7-d01f-4206-b636-b55a599a73bd\") " pod="openshift-network-operator/iptables-alerter-pwn9k" Mar 08 21:57:18.609989 master-0 kubenswrapper[3962]: I0308 21:57:18.609855 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjt52\" (UniqueName: \"kubernetes.io/projected/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-kube-api-access-jjt52\") pod \"marketplace-operator-64bf9778cb-5ljhh\" (UID: \"7e0267ba-5dd7-4e81-885f-95b27a7b42ea\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 21:57:18.610250 master-0 kubenswrapper[3962]: I0308 21:57:18.610199 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b358dcb7-d01f-4206-b636-b55a599a73bd-iptables-alerter-script\") pod \"iptables-alerter-pwn9k\" (UID: \"b358dcb7-d01f-4206-b636-b55a599a73bd\") " pod="openshift-network-operator/iptables-alerter-pwn9k" Mar 08 21:57:18.610250 master-0 kubenswrapper[3962]: I0308 21:57:18.610210 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-5ljhh\" (UID: \"7e0267ba-5dd7-4e81-885f-95b27a7b42ea\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 21:57:18.610679 master-0 kubenswrapper[3962]: I0308 21:57:18.610643 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b358dcb7-d01f-4206-b636-b55a599a73bd-host-slash\") pod \"iptables-alerter-pwn9k\" (UID: \"b358dcb7-d01f-4206-b636-b55a599a73bd\") " pod="openshift-network-operator/iptables-alerter-pwn9k" Mar 08 21:57:18.610679 master-0 kubenswrapper[3962]: I0308 21:57:18.610676 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: 
\"kubernetes.io/empty-dir/de89c423-0f2a-440f-9fa9-92fefea84b09-operand-assets\") pod \"cluster-olm-operator-77899cf6d-mnf25\" (UID: \"de89c423-0f2a-440f-9fa9-92fefea84b09\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" Mar 08 21:57:18.611069 master-0 kubenswrapper[3962]: I0308 21:57:18.611044 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/de89c423-0f2a-440f-9fa9-92fefea84b09-operand-assets\") pod \"cluster-olm-operator-77899cf6d-mnf25\" (UID: \"de89c423-0f2a-440f-9fa9-92fefea84b09\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" Mar 08 21:57:18.611069 master-0 kubenswrapper[3962]: I0308 21:57:18.611040 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b358dcb7-d01f-4206-b636-b55a599a73bd-host-slash\") pod \"iptables-alerter-pwn9k\" (UID: \"b358dcb7-d01f-4206-b636-b55a599a73bd\") " pod="openshift-network-operator/iptables-alerter-pwn9k" Mar 08 21:57:18.614353 master-0 kubenswrapper[3962]: I0308 21:57:18.614167 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b358dcb7-d01f-4206-b636-b55a599a73bd-iptables-alerter-script\") pod \"iptables-alerter-pwn9k\" (UID: \"b358dcb7-d01f-4206-b636-b55a599a73bd\") " pod="openshift-network-operator/iptables-alerter-pwn9k" Mar 08 21:57:18.614487 master-0 kubenswrapper[3962]: I0308 21:57:18.614465 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/de89c423-0f2a-440f-9fa9-92fefea84b09-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-mnf25\" (UID: \"de89c423-0f2a-440f-9fa9-92fefea84b09\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" Mar 08 21:57:18.618429 master-0 kubenswrapper[3962]: I0308 21:57:18.618355 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-784c7\" (UniqueName: \"kubernetes.io/projected/d0641333-feda-44c5-baf5-ceee4ce3fd8f-kube-api-access-784c7\") pod \"openshift-config-operator-64488f9d78-krpfs\" (UID: \"d0641333-feda-44c5-baf5-ceee4ce3fd8f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" Mar 08 21:57:18.630679 master-0 kubenswrapper[3962]: I0308 21:57:18.630580 3962 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" Mar 08 21:57:18.639258 master-0 kubenswrapper[3962]: I0308 21:57:18.639134 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l47w\" (UniqueName: \"kubernetes.io/projected/2851c096-f5cb-4a46-a5a0-ac0b1341033b-kube-api-access-2l47w\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:18.660764 master-0 kubenswrapper[3962]: I0308 21:57:18.660168 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dr4p\" (UniqueName: \"kubernetes.io/projected/df48e7e0-0659-48e2-9b6a-32c964ff47b2-kube-api-access-4dr4p\") pod \"dns-operator-589895fbb7-wtvp5\" (UID: \"df48e7e0-0659-48e2-9b6a-32c964ff47b2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" Mar 08 21:57:18.679312 master-0 kubenswrapper[3962]: I0308 21:57:18.679264 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:18.684265 master-0 kubenswrapper[3962]: I0308 21:57:18.684222 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04fb7bdb-fb5a-4187-94a3-67c8f09684ed-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-mww2c\" (UID: \"04fb7bdb-fb5a-4187-94a3-67c8f09684ed\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" Mar 08 21:57:18.700564 master-0 kubenswrapper[3962]: I0308 21:57:18.700311 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-bound-sa-token\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 21:57:18.714555 master-0 kubenswrapper[3962]: I0308 21:57:18.714364 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2" Mar 08 21:57:18.719155 master-0 kubenswrapper[3962]: I0308 21:57:18.719114 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drcp8\" (UniqueName: \"kubernetes.io/projected/a913c639-ebfc-42a3-85cd-8a460027d3ec-kube-api-access-drcp8\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:18.739750 master-0 kubenswrapper[3962]: I0308 21:57:18.739390 3962 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k" Mar 08 21:57:18.744802 master-0 kubenswrapper[3962]: I0308 21:57:18.744757 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwdhp\" (UniqueName: \"kubernetes.io/projected/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-kube-api-access-vwdhp\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 21:57:18.760459 master-0 kubenswrapper[3962]: I0308 21:57:18.760419 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-nl9qg" Mar 08 21:57:18.779488 master-0 kubenswrapper[3962]: I0308 21:57:18.779275 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw" Mar 08 21:57:18.793606 master-0 kubenswrapper[3962]: I0308 21:57:18.786332 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h4vv\" (UniqueName: \"kubernetes.io/projected/de89c423-0f2a-440f-9fa9-92fefea84b09-kube-api-access-7h4vv\") pod \"cluster-olm-operator-77899cf6d-mnf25\" (UID: \"de89c423-0f2a-440f-9fa9-92fefea84b09\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" Mar 08 21:57:18.793606 master-0 kubenswrapper[3962]: I0308 21:57:18.793338 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 21:57:18.804171 master-0 kubenswrapper[3962]: I0308 21:57:18.800368 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5" Mar 08 21:57:18.809282 master-0 kubenswrapper[3962]: I0308 21:57:18.809241 3962 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" Mar 08 21:57:18.811596 master-0 kubenswrapper[3962]: I0308 21:57:18.811568 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjt52\" (UniqueName: \"kubernetes.io/projected/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-kube-api-access-jjt52\") pod \"marketplace-operator-64bf9778cb-5ljhh\" (UID: \"7e0267ba-5dd7-4e81-885f-95b27a7b42ea\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 21:57:18.824458 master-0 kubenswrapper[3962]: I0308 21:57:18.818269 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg"] Mar 08 21:57:18.830241 master-0 kubenswrapper[3962]: I0308 21:57:18.830046 3962 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmdmr\" (UniqueName: \"kubernetes.io/projected/b358dcb7-d01f-4206-b636-b55a599a73bd-kube-api-access-bmdmr\") pod \"iptables-alerter-pwn9k\" (UID: \"b358dcb7-d01f-4206-b636-b55a599a73bd\") " pod="openshift-network-operator/iptables-alerter-pwn9k" Mar 08 21:57:18.841147 master-0 kubenswrapper[3962]: I0308 21:57:18.841103 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8"] Mar 08 21:57:18.879460 master-0 kubenswrapper[3962]: I0308 21:57:18.877131 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-64488f9d78-krpfs"] Mar 08 21:57:18.889348 master-0 kubenswrapper[3962]: I0308 21:57:18.887613 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" Mar 08 21:57:18.898278 master-0 kubenswrapper[3962]: I0308 21:57:18.897601 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w"] Mar 08 21:57:18.908142 master-0 kubenswrapper[3962]: I0308 21:57:18.904824 3962 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-pwn9k" Mar 08 21:57:18.950119 master-0 kubenswrapper[3962]: I0308 21:57:18.948246 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2"] Mar 08 21:57:18.958838 master-0 kubenswrapper[3962]: W0308 21:57:18.958782 3962 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb849f992_1020_4633_98be_75705b962fa9.slice/crio-60db7aa4fe5c30fe7cef3df3e7aab11e1bd2cef81e0f4a40f64a350ab51de986 WatchSource:0}: Error finding container 60db7aa4fe5c30fe7cef3df3e7aab11e1bd2cef81e0f4a40f64a350ab51de986: Status 404 returned error can't find the container with id 60db7aa4fe5c30fe7cef3df3e7aab11e1bd2cef81e0f4a40f64a350ab51de986 Mar 08 21:57:19.013677 master-0 kubenswrapper[3962]: I0308 21:57:19.013635 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k"] Mar 08 21:57:19.019683 master-0 kubenswrapper[3962]: I0308 21:57:19.016030 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert\") pod \"olm-operator-d64cfc9db-xqh7x\" (UID: \"3c50dd1f-fcbc-412c-a1cc-0738ea4464e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" Mar 08 21:57:19.019683 master-0 kubenswrapper[3962]: E0308 21:57:19.016219 3962 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 08 21:57:19.019683 master-0 kubenswrapper[3962]: I0308 21:57:19.016256 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls\") pod \"dns-operator-589895fbb7-wtvp5\" (UID: \"df48e7e0-0659-48e2-9b6a-32c964ff47b2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" Mar 08 21:57:19.019683 master-0 kubenswrapper[3962]: E0308 21:57:19.016286 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert podName:3c50dd1f-fcbc-412c-a1cc-0738ea4464e0 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:20.016264168 +0000 UTC m=+107.649536450 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert") pod "olm-operator-d64cfc9db-xqh7x" (UID: "3c50dd1f-fcbc-412c-a1cc-0738ea4464e0") : secret "olm-operator-serving-cert" not found Mar 08 21:57:19.019683 master-0 kubenswrapper[3962]: I0308 21:57:19.016313 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:19.019683 master-0 kubenswrapper[3962]: E0308 21:57:19.016367 3962 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 08 21:57:19.019683 master-0 kubenswrapper[3962]: E0308 21:57:19.016410 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls podName:df48e7e0-0659-48e2-9b6a-32c964ff47b2 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:20.016396851 +0000 UTC m=+107.649669053 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls") pod "dns-operator-589895fbb7-wtvp5" (UID: "df48e7e0-0659-48e2-9b6a-32c964ff47b2") : secret "metrics-tls" not found Mar 08 21:57:19.019683 master-0 kubenswrapper[3962]: I0308 21:57:19.016430 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:19.019683 master-0 kubenswrapper[3962]: E0308 21:57:19.016458 3962 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 08 21:57:19.019683 master-0 kubenswrapper[3962]: E0308 21:57:19.016491 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls podName:a913c639-ebfc-42a3-85cd-8a460027d3ec nodeName:}" failed. No retries permitted until 2026-03-08 21:57:20.016482103 +0000 UTC m=+107.649754405 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-g2ddr" (UID: "a913c639-ebfc-42a3-85cd-8a460027d3ec") : secret "image-registry-operator-tls" not found Mar 08 21:57:19.019683 master-0 kubenswrapper[3962]: I0308 21:57:19.016457 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs\") pod \"multus-admission-controller-8d675b596-ddw98\" (UID: \"1dfc8afd-2330-46a4-ae5b-36522102b332\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddw98" Mar 08 21:57:19.019683 master-0 kubenswrapper[3962]: I0308 21:57:19.016538 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-mt484\" (UID: \"4ef806a4-5486-43a9-8bfa-b1670c888dc1\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" Mar 08 21:57:19.019683 master-0 kubenswrapper[3962]: I0308 21:57:19.016567 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert\") pod \"catalog-operator-7d9c49f57b-6q5t2\" (UID: \"83b5f0b6-adee-4820-8212-b4d182b178d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" Mar 08 21:57:19.019683 master-0 kubenswrapper[3962]: I0308 21:57:19.016605 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 21:57:19.019683 master-0 kubenswrapper[3962]: I0308 21:57:19.016656 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-x5zxr\" (UID: \"be431b74-1116-4b0f-8b25-bbb0408411b0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 21:57:19.020557 master-0 kubenswrapper[3962]: I0308 21:57:19.016685 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:19.020557 master-0 kubenswrapper[3962]: E0308 21:57:19.016496 3962 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 08 21:57:19.020557 master-0 kubenswrapper[3962]: E0308 21:57:19.016817 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs podName:1dfc8afd-2330-46a4-ae5b-36522102b332 nodeName:}" failed. 
No retries permitted until 2026-03-08 21:57:20.016808232 +0000 UTC m=+107.650080554 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs") pod "multus-admission-controller-8d675b596-ddw98" (UID: "1dfc8afd-2330-46a4-ae5b-36522102b332") : secret "multus-admission-controller-secret" not found Mar 08 21:57:19.020557 master-0 kubenswrapper[3962]: E0308 21:57:19.016862 3962 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 08 21:57:19.020557 master-0 kubenswrapper[3962]: E0308 21:57:19.016887 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls podName:4ef806a4-5486-43a9-8bfa-b1670c888dc1 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:20.016877553 +0000 UTC m=+107.650149855 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-mt484" (UID: "4ef806a4-5486-43a9-8bfa-b1670c888dc1") : secret "cluster-monitoring-operator-tls" not found Mar 08 21:57:19.020557 master-0 kubenswrapper[3962]: E0308 21:57:19.016930 3962 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 08 21:57:19.020557 master-0 kubenswrapper[3962]: E0308 21:57:19.016954 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert podName:83b5f0b6-adee-4820-8212-b4d182b178d2 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:20.016946005 +0000 UTC m=+107.650218207 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert") pod "catalog-operator-7d9c49f57b-6q5t2" (UID: "83b5f0b6-adee-4820-8212-b4d182b178d2") : secret "catalog-operator-serving-cert" not found Mar 08 21:57:19.020557 master-0 kubenswrapper[3962]: E0308 21:57:19.016999 3962 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 08 21:57:19.020557 master-0 kubenswrapper[3962]: E0308 21:57:19.017023 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls podName:84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed nodeName:}" failed. No retries permitted until 2026-03-08 21:57:20.017016258 +0000 UTC m=+107.650288450 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls") pod "ingress-operator-677db989d6-cjdgr" (UID: "84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed") : secret "metrics-tls" not found Mar 08 21:57:19.020557 master-0 kubenswrapper[3962]: E0308 21:57:19.017089 3962 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 08 21:57:19.020557 master-0 kubenswrapper[3962]: E0308 21:57:19.017124 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert podName:be431b74-1116-4b0f-8b25-bbb0408411b0 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:20.01711624 +0000 UTC m=+107.650388522 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-x5zxr" (UID: "be431b74-1116-4b0f-8b25-bbb0408411b0") : secret "package-server-manager-serving-cert" not found Mar 08 21:57:19.020557 master-0 kubenswrapper[3962]: E0308 21:57:19.016526 3962 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 08 21:57:19.020557 master-0 kubenswrapper[3962]: E0308 21:57:19.017152 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls podName:2851c096-f5cb-4a46-a5a0-ac0b1341033b nodeName:}" failed. No retries permitted until 2026-03-08 21:57:20.017145111 +0000 UTC m=+107.650417313 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-c4lpf" (UID: "2851c096-f5cb-4a46-a5a0-ac0b1341033b") : secret "node-tuning-operator-tls" not found Mar 08 21:57:19.020557 master-0 kubenswrapper[3962]: E0308 21:57:19.016783 3962 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 08 21:57:19.021000 master-0 kubenswrapper[3962]: E0308 21:57:19.017177 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert podName:2851c096-f5cb-4a46-a5a0-ac0b1341033b nodeName:}" failed. No retries permitted until 2026-03-08 21:57:20.017171072 +0000 UTC m=+107.650443284 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-c4lpf" (UID: "2851c096-f5cb-4a46-a5a0-ac0b1341033b") : secret "performance-addon-operator-webhook-cert" not found Mar 08 21:57:19.081821 master-0 kubenswrapper[3962]: I0308 21:57:19.081564 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx"] Mar 08 21:57:19.097346 master-0 kubenswrapper[3962]: I0308 21:57:19.097144 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw"] Mar 08 21:57:19.118179 master-0 kubenswrapper[3962]: I0308 21:57:19.118131 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-5ljhh\" (UID: \"7e0267ba-5dd7-4e81-885f-95b27a7b42ea\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 21:57:19.118353 master-0 kubenswrapper[3962]: E0308 21:57:19.118294 3962 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 08 21:57:19.118450 master-0 kubenswrapper[3962]: E0308 21:57:19.118421 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics podName:7e0267ba-5dd7-4e81-885f-95b27a7b42ea nodeName:}" failed. No retries permitted until 2026-03-08 21:57:20.118376422 +0000 UTC m=+107.751648714 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-5ljhh" (UID: "7e0267ba-5dd7-4e81-885f-95b27a7b42ea") : secret "marketplace-operator-metrics" not found Mar 08 21:57:19.124943 master-0 kubenswrapper[3962]: I0308 21:57:19.124855 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c"] Mar 08 21:57:19.136456 master-0 kubenswrapper[3962]: I0308 21:57:19.135464 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5"] Mar 08 21:57:19.136456 master-0 kubenswrapper[3962]: I0308 21:57:19.135839 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-nl9qg"] Mar 08 21:57:19.141169 master-0 kubenswrapper[3962]: W0308 21:57:19.141135 3962 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d851f97_b21e_432e_a4c3_dc0a8ff00e84.slice/crio-44b935a06c24e92b8520f103f003c519a8f99b22186edfd342cc9323faa0eca5 WatchSource:0}: Error finding container 44b935a06c24e92b8520f103f003c519a8f99b22186edfd342cc9323faa0eca5: Status 404 returned error can't find the container with id 44b935a06c24e92b8520f103f003c519a8f99b22186edfd342cc9323faa0eca5 Mar 08 21:57:19.155600 master-0 kubenswrapper[3962]: I0308 21:57:19.153820 3962 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25"] Mar 08 21:57:19.155600 master-0 kubenswrapper[3962]: W0308 21:57:19.153848 3962 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37bf82cb_adea_46d3_a899_136eb1d1f292.slice/crio-362c3b514579828187f546dc53101831b423b473ee9512ccb87f2423ae6040c3 WatchSource:0}: Error finding container 362c3b514579828187f546dc53101831b423b473ee9512ccb87f2423ae6040c3: Status 404 returned error can't find the container with id 362c3b514579828187f546dc53101831b423b473ee9512ccb87f2423ae6040c3 Mar 08 21:57:19.166472 master-0 kubenswrapper[3962]: E0308 21:57:19.165675 3962 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:copy-catalogd-manifests,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783,Command:[/bin/sh],Args:[-c cp -a /openshift/manifests 
/operand-assets/catalogd],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:operand-assets,ReadOnly:false,MountPath:/operand-assets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7h4vv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000310000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cluster-olm-operator-77899cf6d-mnf25_openshift-cluster-olm-operator(de89c423-0f2a-440f-9fa9-92fefea84b09): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 08 21:57:19.166982 master-0 kubenswrapper[3962]: E0308 21:57:19.166862 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"copy-catalogd-manifests\" with ErrImagePull: \"pull QPS exceeded\"" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" podUID="de89c423-0f2a-440f-9fa9-92fefea84b09" Mar 08 21:57:19.189284 master-0 kubenswrapper[3962]: I0308 21:57:19.188793 3962 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:57:19.189624 master-0 kubenswrapper[3962]: I0308 21:57:19.189580 3962 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:57:19.201581 master-0 kubenswrapper[3962]: I0308 21:57:19.201548 3962 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 08 21:57:19.241770 master-0 kubenswrapper[3962]: I0308 21:57:19.241730 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 08 21:57:19.261445 master-0 kubenswrapper[3962]: I0308 21:57:19.261159 3962 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 08 21:57:19.779724 master-0 kubenswrapper[3962]: I0308 21:57:19.779551 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" event={"ID":"d0641333-feda-44c5-baf5-ceee4ce3fd8f","Type":"ContainerStarted","Data":"503b7b6ea77465c9cbfc84fe62fda0b7b8ad6a8d2fd54128890065de069b7f20"} Mar 08 21:57:19.782285 master-0 kubenswrapper[3962]: I0308 21:57:19.782242 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" event={"ID":"4382d186-34e4-40af-9b92-bb17ddcaa23f","Type":"ContainerStarted","Data":"39ad18e2cdc22131103d7ee2686ffb12580bbefadb50c1a1863e06df883204d5"} Mar 08 21:57:19.783426 master-0 kubenswrapper[3962]: I0308 21:57:19.783386 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" event={"ID":"de89c423-0f2a-440f-9fa9-92fefea84b09","Type":"ContainerStarted","Data":"6a34c2634ae54a66cec214aefe9bf2e49ebc56d1b92acdc88a8676a1ce3196bd"} Mar 08 21:57:19.785578 master-0 kubenswrapper[3962]: E0308 21:57:19.785389 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"copy-catalogd-manifests\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783\\\"\"" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" podUID="de89c423-0f2a-440f-9fa9-92fefea84b09" Mar 08 21:57:19.785863 master-0 kubenswrapper[3962]: I0308 21:57:19.785836 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k" event={"ID":"a8e00c74-fb72-4e3d-a22c-c38a4772a813","Type":"ContainerStarted","Data":"dc168342b2accc24dd805b536a42a0f0ef9ceaae1895f17c33c4e06a0c3e9184"} Mar 08 21:57:19.822496 master-0 kubenswrapper[3962]: I0308 21:57:19.822305 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5" event={"ID":"0d851f97-b21e-432e-a4c3-dc0a8ff00e84","Type":"ContainerStarted","Data":"44b935a06c24e92b8520f103f003c519a8f99b22186edfd342cc9323faa0eca5"} Mar 08 21:57:19.834487 master-0 kubenswrapper[3962]: I0308 21:57:19.834389 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2" event={"ID":"b849f992-1020-4633-98be-75705b962fa9","Type":"ContainerStarted","Data":"60db7aa4fe5c30fe7cef3df3e7aab11e1bd2cef81e0f4a40f64a350ab51de986"} Mar 08 21:57:19.849125 master-0 kubenswrapper[3962]: I0308 21:57:19.848268 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" 
event={"ID":"d4d01185-e485-4697-92c2-31a044f25d82","Type":"ContainerStarted","Data":"b606b54eb942579ee14be5af80441dce4b4a9b6234020bb3e61d0131e1fde21b"} Mar 08 21:57:19.864321 master-0 kubenswrapper[3962]: I0308 21:57:19.864217 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" event={"ID":"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657","Type":"ContainerStarted","Data":"e5a5d91cfd17574435ef488a30976925f613e8868e1af9e7f86a003675b330e2"} Mar 08 21:57:19.868512 master-0 kubenswrapper[3962]: I0308 21:57:19.868465 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" event={"ID":"04fb7bdb-fb5a-4187-94a3-67c8f09684ed","Type":"ContainerStarted","Data":"00c5ed3578644c2cfcf3b05743187fa1a4e66cf46b816a9e956e779028d0b36b"} Mar 08 21:57:19.868581 master-0 kubenswrapper[3962]: I0308 21:57:19.868526 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" event={"ID":"04fb7bdb-fb5a-4187-94a3-67c8f09684ed","Type":"ContainerStarted","Data":"6798958131d9b6122a924f582d5cf236ae0ff108ba6efd07ed21d07002d8eda4"} Mar 08 21:57:19.872751 master-0 kubenswrapper[3962]: I0308 21:57:19.872713 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-nl9qg" event={"ID":"37bf82cb-adea-46d3-a899-136eb1d1f292","Type":"ContainerStarted","Data":"362c3b514579828187f546dc53101831b423b473ee9512ccb87f2423ae6040c3"} Mar 08 21:57:19.874346 master-0 kubenswrapper[3962]: I0308 21:57:19.874323 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-pwn9k" event={"ID":"b358dcb7-d01f-4206-b636-b55a599a73bd","Type":"ContainerStarted","Data":"2f7507c2d466367da3bbc24168dc98c7fc99ef0ee4b7823db51ec09616db7efe"} Mar 08 21:57:19.876203 master-0 kubenswrapper[3962]: I0308 21:57:19.876174 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw" event={"ID":"971ffa86-4d52-4dc3-ba28-03d116ec3494","Type":"ContainerStarted","Data":"427fdbe110b0876dd13174b0756ac4196ec70da6181541067d85f985ac05aca4"} Mar 08 21:57:19.878010 master-0 kubenswrapper[3962]: I0308 21:57:19.877937 3962 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg" event={"ID":"f6fbc12f-3c27-4a7a-933f-43a55c960335","Type":"ContainerStarted","Data":"e1a74bb495c9d9aab308272824975d3fa3476be254ef7c02bd62f9151f2ab266"} Mar 08 21:57:19.886121 master-0 kubenswrapper[3962]: I0308 21:57:19.886047 3962 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" podStartSLOduration=72.886029833 podStartE2EDuration="1m12.886029833s" podCreationTimestamp="2026-03-08 21:56:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 21:57:19.884552875 +0000 UTC m=+107.517825087" watchObservedRunningTime="2026-03-08 21:57:19.886029833 +0000 UTC m=+107.519302035" Mar 08 21:57:20.030248 master-0 kubenswrapper[3962]: I0308 21:57:20.030120 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert\") pod \"olm-operator-d64cfc9db-xqh7x\" (UID: \"3c50dd1f-fcbc-412c-a1cc-0738ea4464e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" Mar 08 21:57:20.030248 master-0 kubenswrapper[3962]: I0308 21:57:20.030188 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls\") pod \"dns-operator-589895fbb7-wtvp5\" (UID: \"df48e7e0-0659-48e2-9b6a-32c964ff47b2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" Mar 08 21:57:20.030464 master-0 kubenswrapper[3962]: E0308 21:57:20.030344 3962 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 08 21:57:20.030533 master-0 kubenswrapper[3962]: I0308 21:57:20.030466 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:20.030621 master-0 kubenswrapper[3962]: E0308 21:57:20.030601 3962 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 08 21:57:20.030653 master-0 kubenswrapper[3962]: I0308 21:57:20.030634 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:20.030681 master-0 kubenswrapper[3962]: E0308 21:57:20.030670 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls podName:df48e7e0-0659-48e2-9b6a-32c964ff47b2 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:22.030652162 +0000 UTC m=+109.663924374 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls") pod "dns-operator-589895fbb7-wtvp5" (UID: "df48e7e0-0659-48e2-9b6a-32c964ff47b2") : secret "metrics-tls" not found Mar 08 21:57:20.030757 master-0 kubenswrapper[3962]: I0308 21:57:20.030713 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs\") pod \"multus-admission-controller-8d675b596-ddw98\" (UID: \"1dfc8afd-2330-46a4-ae5b-36522102b332\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddw98" Mar 08 21:57:20.030757 master-0 kubenswrapper[3962]: E0308 21:57:20.030753 3962 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 08 21:57:20.030838 master-0 kubenswrapper[3962]: I0308 21:57:20.030772 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert\") pod \"catalog-operator-7d9c49f57b-6q5t2\" (UID: \"83b5f0b6-adee-4820-8212-b4d182b178d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" Mar 08 21:57:20.030838 master-0 kubenswrapper[3962]: E0308 21:57:20.030785 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs podName:1dfc8afd-2330-46a4-ae5b-36522102b332 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:22.030777146 +0000 UTC m=+109.664049368 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs") pod "multus-admission-controller-8d675b596-ddw98" (UID: "1dfc8afd-2330-46a4-ae5b-36522102b332") : secret "multus-admission-controller-secret" not found Mar 08 21:57:20.030838 master-0 kubenswrapper[3962]: I0308 21:57:20.030809 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-mt484\" (UID: \"4ef806a4-5486-43a9-8bfa-b1670c888dc1\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" Mar 08 21:57:20.030838 master-0 kubenswrapper[3962]: E0308 21:57:20.030831 3962 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 08 21:57:20.030990 master-0 kubenswrapper[3962]: E0308 21:57:20.030846 3962 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 08 21:57:20.030990 master-0 kubenswrapper[3962]: E0308 21:57:20.030878 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert podName:83b5f0b6-adee-4820-8212-b4d182b178d2 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:22.030861448 +0000 UTC m=+109.664133650 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert") pod "catalog-operator-7d9c49f57b-6q5t2" (UID: "83b5f0b6-adee-4820-8212-b4d182b178d2") : secret "catalog-operator-serving-cert" not found Mar 08 21:57:20.030990 master-0 kubenswrapper[3962]: E0308 21:57:20.030910 3962 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 08 21:57:20.030990 master-0 kubenswrapper[3962]: E0308 21:57:20.030919 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls podName:a913c639-ebfc-42a3-85cd-8a460027d3ec nodeName:}" failed. No retries permitted until 2026-03-08 21:57:22.030900119 +0000 UTC m=+109.664172411 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-g2ddr" (UID: "a913c639-ebfc-42a3-85cd-8a460027d3ec") : secret "image-registry-operator-tls" not found Mar 08 21:57:20.030990 master-0 kubenswrapper[3962]: E0308 21:57:20.030968 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls podName:4ef806a4-5486-43a9-8bfa-b1670c888dc1 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:22.03095185 +0000 UTC m=+109.664224042 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-mt484" (UID: "4ef806a4-5486-43a9-8bfa-b1670c888dc1") : secret "cluster-monitoring-operator-tls" not found Mar 08 21:57:20.031262 master-0 kubenswrapper[3962]: E0308 21:57:20.031101 3962 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 08 21:57:20.031262 master-0 kubenswrapper[3962]: E0308 21:57:20.031138 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert podName:3c50dd1f-fcbc-412c-a1cc-0738ea4464e0 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:22.031046253 +0000 UTC m=+109.664318585 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert") pod "olm-operator-d64cfc9db-xqh7x" (UID: "3c50dd1f-fcbc-412c-a1cc-0738ea4464e0") : secret "olm-operator-serving-cert" not found Mar 08 21:57:20.031588 master-0 kubenswrapper[3962]: E0308 21:57:20.031550 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls podName:2851c096-f5cb-4a46-a5a0-ac0b1341033b nodeName:}" failed. No retries permitted until 2026-03-08 21:57:22.031538325 +0000 UTC m=+109.664810627 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-c4lpf" (UID: "2851c096-f5cb-4a46-a5a0-ac0b1341033b") : secret "node-tuning-operator-tls" not found Mar 08 21:57:20.032227 master-0 kubenswrapper[3962]: I0308 21:57:20.032205 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 21:57:20.032286 master-0 kubenswrapper[3962]: I0308 21:57:20.032270 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-x5zxr\" (UID: \"be431b74-1116-4b0f-8b25-bbb0408411b0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 21:57:20.032325 master-0 kubenswrapper[3962]: I0308 21:57:20.032306 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:20.032418 master-0 kubenswrapper[3962]: E0308 21:57:20.032404 3962 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 08 21:57:20.032457 master-0 kubenswrapper[3962]: E0308 21:57:20.032445 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert podName:2851c096-f5cb-4a46-a5a0-ac0b1341033b nodeName:}" failed. No retries permitted until 2026-03-08 21:57:22.032433399 +0000 UTC m=+109.665705611 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-c4lpf" (UID: "2851c096-f5cb-4a46-a5a0-ac0b1341033b") : secret "performance-addon-operator-webhook-cert" not found Mar 08 21:57:20.032524 master-0 kubenswrapper[3962]: E0308 21:57:20.032503 3962 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 08 21:57:20.034283 master-0 kubenswrapper[3962]: E0308 21:57:20.034266 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert podName:be431b74-1116-4b0f-8b25-bbb0408411b0 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:22.034249705 +0000 UTC m=+109.667521917 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-x5zxr" (UID: "be431b74-1116-4b0f-8b25-bbb0408411b0") : secret "package-server-manager-serving-cert" not found Mar 08 21:57:20.034334 master-0 kubenswrapper[3962]: E0308 21:57:20.032671 3962 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 08 21:57:20.034334 master-0 kubenswrapper[3962]: E0308 21:57:20.034321 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls podName:84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed nodeName:}" failed. No retries permitted until 2026-03-08 21:57:22.034302987 +0000 UTC m=+109.667575199 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls") pod "ingress-operator-677db989d6-cjdgr" (UID: "84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed") : secret "metrics-tls" not found Mar 08 21:57:20.134113 master-0 kubenswrapper[3962]: I0308 21:57:20.133507 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-5ljhh\" (UID: \"7e0267ba-5dd7-4e81-885f-95b27a7b42ea\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 21:57:20.134113 master-0 kubenswrapper[3962]: E0308 21:57:20.133683 3962 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 08 21:57:20.134113 master-0 kubenswrapper[3962]: E0308 21:57:20.133755 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics podName:7e0267ba-5dd7-4e81-885f-95b27a7b42ea nodeName:}" failed. No retries permitted until 2026-03-08 21:57:22.133733381 +0000 UTC m=+109.767005583 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-5ljhh" (UID: "7e0267ba-5dd7-4e81-885f-95b27a7b42ea") : secret "marketplace-operator-metrics" not found Mar 08 21:57:20.883994 master-0 kubenswrapper[3962]: E0308 21:57:20.883730 3962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"copy-catalogd-manifests\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783\\\"\"" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" podUID="de89c423-0f2a-440f-9fa9-92fefea84b09" Mar 08 21:57:22.066143 master-0 kubenswrapper[3962]: I0308 21:57:22.065919 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-x5zxr\" (UID: \"be431b74-1116-4b0f-8b25-bbb0408411b0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 21:57:22.066143 master-0 kubenswrapper[3962]: I0308 21:57:22.066131 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:22.066722 master-0 kubenswrapper[3962]: E0308 21:57:22.066174 3962 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 08 21:57:22.066722 master-0 kubenswrapper[3962]: E0308 21:57:22.066267 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert podName:be431b74-1116-4b0f-8b25-bbb0408411b0 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:26.066246768 +0000 UTC m=+113.699518970 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-x5zxr" (UID: "be431b74-1116-4b0f-8b25-bbb0408411b0") : secret "package-server-manager-serving-cert" not found Mar 08 21:57:22.066722 master-0 kubenswrapper[3962]: E0308 21:57:22.066394 3962 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 08 21:57:22.066722 master-0 kubenswrapper[3962]: E0308 21:57:22.066549 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert podName:2851c096-f5cb-4a46-a5a0-ac0b1341033b nodeName:}" failed. No retries permitted until 2026-03-08 21:57:26.066511345 +0000 UTC m=+113.699783587 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-c4lpf" (UID: "2851c096-f5cb-4a46-a5a0-ac0b1341033b") : secret "performance-addon-operator-webhook-cert" not found Mar 08 21:57:22.066722 master-0 kubenswrapper[3962]: I0308 21:57:22.066408 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert\") pod \"olm-operator-d64cfc9db-xqh7x\" (UID: \"3c50dd1f-fcbc-412c-a1cc-0738ea4464e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" Mar 08 21:57:22.066722 master-0 kubenswrapper[3962]: E0308 21:57:22.066651 3962 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 08 21:57:22.066722 master-0 kubenswrapper[3962]: I0308 21:57:22.066678 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls\") pod \"dns-operator-589895fbb7-wtvp5\" (UID: \"df48e7e0-0659-48e2-9b6a-32c964ff47b2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" Mar 08 21:57:22.066903 master-0 kubenswrapper[3962]: E0308 21:57:22.066773 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert podName:3c50dd1f-fcbc-412c-a1cc-0738ea4464e0 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:26.066734951 +0000 UTC m=+113.700007373 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert") pod "olm-operator-d64cfc9db-xqh7x" (UID: "3c50dd1f-fcbc-412c-a1cc-0738ea4464e0") : secret "olm-operator-serving-cert" not found Mar 08 21:57:22.066903 master-0 kubenswrapper[3962]: E0308 21:57:22.066819 3962 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 08 21:57:22.066903 master-0 kubenswrapper[3962]: I0308 21:57:22.066823 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:22.066903 master-0 kubenswrapper[3962]: E0308 21:57:22.066865 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls podName:df48e7e0-0659-48e2-9b6a-32c964ff47b2 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:26.066850444 +0000 UTC m=+113.700122676 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls") pod "dns-operator-589895fbb7-wtvp5" (UID: "df48e7e0-0659-48e2-9b6a-32c964ff47b2") : secret "metrics-tls" not found Mar 08 21:57:22.067008 master-0 kubenswrapper[3962]: I0308 21:57:22.066954 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:22.067102 master-0 kubenswrapper[3962]: E0308 21:57:22.067047 3962 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 08 21:57:22.068922 master-0 kubenswrapper[3962]: E0308 21:57:22.068159 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls podName:a913c639-ebfc-42a3-85cd-8a460027d3ec nodeName:}" failed. No retries permitted until 2026-03-08 21:57:26.068145057 +0000 UTC m=+113.701417259 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-g2ddr" (UID: "a913c639-ebfc-42a3-85cd-8a460027d3ec") : secret "image-registry-operator-tls" not found Mar 08 21:57:22.068922 master-0 kubenswrapper[3962]: I0308 21:57:22.067213 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs\") pod \"multus-admission-controller-8d675b596-ddw98\" (UID: \"1dfc8afd-2330-46a4-ae5b-36522102b332\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddw98" Mar 08 21:57:22.068922 master-0 kubenswrapper[3962]: I0308 21:57:22.068231 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-mt484\" (UID: \"4ef806a4-5486-43a9-8bfa-b1670c888dc1\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" Mar 08 21:57:22.068922 master-0 kubenswrapper[3962]: I0308 21:57:22.068256 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert\") pod \"catalog-operator-7d9c49f57b-6q5t2\" (UID: \"83b5f0b6-adee-4820-8212-b4d182b178d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" Mar 08 21:57:22.068922 master-0 kubenswrapper[3962]: I0308 21:57:22.068294 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 21:57:22.068922 master-0 kubenswrapper[3962]: E0308 21:57:22.068372 3962 secret.go:189] Couldn't get secret 
openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 08 21:57:22.068922 master-0 kubenswrapper[3962]: E0308 21:57:22.068394 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls podName:84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed nodeName:}" failed. No retries permitted until 2026-03-08 21:57:26.068387094 +0000 UTC m=+113.701659296 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls") pod "ingress-operator-677db989d6-cjdgr" (UID: "84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed") : secret "metrics-tls" not found Mar 08 21:57:22.068922 master-0 kubenswrapper[3962]: E0308 21:57:22.067159 3962 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 08 21:57:22.068922 master-0 kubenswrapper[3962]: E0308 21:57:22.068424 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls podName:2851c096-f5cb-4a46-a5a0-ac0b1341033b nodeName:}" failed. No retries permitted until 2026-03-08 21:57:26.068416844 +0000 UTC m=+113.701689046 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-c4lpf" (UID: "2851c096-f5cb-4a46-a5a0-ac0b1341033b") : secret "node-tuning-operator-tls" not found Mar 08 21:57:22.068922 master-0 kubenswrapper[3962]: E0308 21:57:22.068461 3962 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 08 21:57:22.068922 master-0 kubenswrapper[3962]: E0308 21:57:22.068479 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls podName:4ef806a4-5486-43a9-8bfa-b1670c888dc1 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:26.068473386 +0000 UTC m=+113.701745828 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-mt484" (UID: "4ef806a4-5486-43a9-8bfa-b1670c888dc1") : secret "cluster-monitoring-operator-tls" not found Mar 08 21:57:22.068922 master-0 kubenswrapper[3962]: E0308 21:57:22.068514 3962 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 08 21:57:22.068922 master-0 kubenswrapper[3962]: E0308 21:57:22.068531 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert podName:83b5f0b6-adee-4820-8212-b4d182b178d2 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:26.068525897 +0000 UTC m=+113.701798089 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert") pod "catalog-operator-7d9c49f57b-6q5t2" (UID: "83b5f0b6-adee-4820-8212-b4d182b178d2") : secret "catalog-operator-serving-cert" not found Mar 08 21:57:22.068922 master-0 kubenswrapper[3962]: E0308 21:57:22.067299 3962 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 08 21:57:22.069446 master-0 kubenswrapper[3962]: E0308 21:57:22.068557 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs podName:1dfc8afd-2330-46a4-ae5b-36522102b332 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:26.068551298 +0000 UTC m=+113.701823500 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs") pod "multus-admission-controller-8d675b596-ddw98" (UID: "1dfc8afd-2330-46a4-ae5b-36522102b332") : secret "multus-admission-controller-secret" not found Mar 08 21:57:22.169315 master-0 kubenswrapper[3962]: I0308 21:57:22.169235 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-5ljhh\" (UID: \"7e0267ba-5dd7-4e81-885f-95b27a7b42ea\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 21:57:22.169544 master-0 kubenswrapper[3962]: E0308 21:57:22.169397 3962 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 08 21:57:22.169544 master-0 kubenswrapper[3962]: E0308 21:57:22.169502 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics podName:7e0267ba-5dd7-4e81-885f-95b27a7b42ea nodeName:}" failed. No retries permitted until 2026-03-08 21:57:26.169467411 +0000 UTC m=+113.802739613 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-5ljhh" (UID: "7e0267ba-5dd7-4e81-885f-95b27a7b42ea") : secret "marketplace-operator-metrics" not found Mar 08 21:57:26.083617 master-0 kubenswrapper[3962]: I0308 21:57:26.083504 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls\") pod \"dns-operator-589895fbb7-wtvp5\" (UID: \"df48e7e0-0659-48e2-9b6a-32c964ff47b2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" Mar 08 21:57:26.083617 master-0 kubenswrapper[3962]: I0308 21:57:26.083581 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:26.085208 master-0 kubenswrapper[3962]: E0308 21:57:26.083830 3962 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 08 21:57:26.085208 master-0 kubenswrapper[3962]: I0308 21:57:26.083907 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:26.085208 master-0 kubenswrapper[3962]: E0308 21:57:26.083960 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls podName:df48e7e0-0659-48e2-9b6a-32c964ff47b2 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:34.083928419 +0000 UTC m=+121.717200651 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls") pod "dns-operator-589895fbb7-wtvp5" (UID: "df48e7e0-0659-48e2-9b6a-32c964ff47b2") : secret "metrics-tls" not found Mar 08 21:57:26.085208 master-0 kubenswrapper[3962]: I0308 21:57:26.084015 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs\") pod \"multus-admission-controller-8d675b596-ddw98\" (UID: \"1dfc8afd-2330-46a4-ae5b-36522102b332\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddw98" Mar 08 21:57:26.085208 master-0 kubenswrapper[3962]: E0308 21:57:26.084019 3962 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 08 21:57:26.085208 master-0 kubenswrapper[3962]: E0308 21:57:26.084152 3962 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 08 21:57:26.085208 master-0 kubenswrapper[3962]: E0308 21:57:26.084167 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls podName:a913c639-ebfc-42a3-85cd-8a460027d3ec nodeName:}" failed. No retries permitted until 2026-03-08 21:57:34.084148395 +0000 UTC m=+121.717420817 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-g2ddr" (UID: "a913c639-ebfc-42a3-85cd-8a460027d3ec") : secret "image-registry-operator-tls" not found Mar 08 21:57:26.085208 master-0 kubenswrapper[3962]: I0308 21:57:26.084099 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-mt484\" (UID: \"4ef806a4-5486-43a9-8bfa-b1670c888dc1\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" Mar 08 21:57:26.085208 master-0 kubenswrapper[3962]: E0308 21:57:26.084092 3962 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 08 21:57:26.085208 master-0 kubenswrapper[3962]: E0308 21:57:26.084205 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs podName:1dfc8afd-2330-46a4-ae5b-36522102b332 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:34.084190566 +0000 UTC m=+121.717462808 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs") pod "multus-admission-controller-8d675b596-ddw98" (UID: "1dfc8afd-2330-46a4-ae5b-36522102b332") : secret "multus-admission-controller-secret" not found Mar 08 21:57:26.085208 master-0 kubenswrapper[3962]: E0308 21:57:26.084254 3962 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 08 21:57:26.085208 master-0 kubenswrapper[3962]: E0308 21:57:26.084269 3962 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 08 21:57:26.085208 master-0 kubenswrapper[3962]: E0308 21:57:26.084281 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls podName:2851c096-f5cb-4a46-a5a0-ac0b1341033b nodeName:}" failed. No retries permitted until 2026-03-08 21:57:34.084265998 +0000 UTC m=+121.717538240 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-c4lpf" (UID: "2851c096-f5cb-4a46-a5a0-ac0b1341033b") : secret "node-tuning-operator-tls" not found Mar 08 21:57:26.085208 master-0 kubenswrapper[3962]: E0308 21:57:26.084303 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert podName:83b5f0b6-adee-4820-8212-b4d182b178d2 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:34.084292258 +0000 UTC m=+121.717564490 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert") pod "catalog-operator-7d9c49f57b-6q5t2" (UID: "83b5f0b6-adee-4820-8212-b4d182b178d2") : secret "catalog-operator-serving-cert" not found Mar 08 21:57:26.085208 master-0 kubenswrapper[3962]: I0308 21:57:26.084251 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert\") pod \"catalog-operator-7d9c49f57b-6q5t2\" (UID: \"83b5f0b6-adee-4820-8212-b4d182b178d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" Mar 08 21:57:26.085687 master-0 kubenswrapper[3962]: E0308 21:57:26.084327 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls podName:4ef806a4-5486-43a9-8bfa-b1670c888dc1 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:34.084316059 +0000 UTC m=+121.717588301 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-mt484" (UID: "4ef806a4-5486-43a9-8bfa-b1670c888dc1") : secret "cluster-monitoring-operator-tls" not found Mar 08 21:57:26.085687 master-0 kubenswrapper[3962]: I0308 21:57:26.084392 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 21:57:26.085687 master-0 kubenswrapper[3962]: I0308 21:57:26.084438 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-x5zxr\" (UID: \"be431b74-1116-4b0f-8b25-bbb0408411b0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 21:57:26.085687 master-0 kubenswrapper[3962]: I0308 21:57:26.084470 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:26.085687 master-0 kubenswrapper[3962]: E0308 21:57:26.084523 3962 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 08 21:57:26.085687 master-0 kubenswrapper[3962]: E0308 21:57:26.084568 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls podName:84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed nodeName:}" failed. No retries permitted until 2026-03-08 21:57:34.084554665 +0000 UTC m=+121.717827077 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls") pod "ingress-operator-677db989d6-cjdgr" (UID: "84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed") : secret "metrics-tls" not found Mar 08 21:57:26.085687 master-0 kubenswrapper[3962]: E0308 21:57:26.084577 3962 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 08 21:57:26.085687 master-0 kubenswrapper[3962]: E0308 21:57:26.084619 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert podName:3c50dd1f-fcbc-412c-a1cc-0738ea4464e0 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:34.084599396 +0000 UTC m=+121.717871598 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert") pod "olm-operator-d64cfc9db-xqh7x" (UID: "3c50dd1f-fcbc-412c-a1cc-0738ea4464e0") : secret "olm-operator-serving-cert" not found Mar 08 21:57:26.085687 master-0 kubenswrapper[3962]: E0308 21:57:26.084629 3962 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 08 21:57:26.085687 master-0 kubenswrapper[3962]: E0308 21:57:26.084660 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert podName:be431b74-1116-4b0f-8b25-bbb0408411b0 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:34.084652087 +0000 UTC m=+121.717924289 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-x5zxr" (UID: "be431b74-1116-4b0f-8b25-bbb0408411b0") : secret "package-server-manager-serving-cert" not found Mar 08 21:57:26.085687 master-0 kubenswrapper[3962]: E0308 21:57:26.084663 3962 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 08 21:57:26.085687 master-0 kubenswrapper[3962]: I0308 21:57:26.084525 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert\") pod \"olm-operator-d64cfc9db-xqh7x\" (UID: \"3c50dd1f-fcbc-412c-a1cc-0738ea4464e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" Mar 08 21:57:26.085687 master-0 kubenswrapper[3962]: E0308 21:57:26.084688 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert podName:2851c096-f5cb-4a46-a5a0-ac0b1341033b nodeName:}" failed. No retries permitted until 2026-03-08 21:57:34.084682778 +0000 UTC m=+121.717954980 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-c4lpf" (UID: "2851c096-f5cb-4a46-a5a0-ac0b1341033b") : secret "performance-addon-operator-webhook-cert" not found Mar 08 21:57:26.185846 master-0 kubenswrapper[3962]: I0308 21:57:26.185759 3962 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-5ljhh\" (UID: \"7e0267ba-5dd7-4e81-885f-95b27a7b42ea\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 21:57:26.186134 master-0 kubenswrapper[3962]: E0308 21:57:26.186101 3962 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 08 21:57:26.186189 master-0 kubenswrapper[3962]: E0308 21:57:26.186174 3962 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics podName:7e0267ba-5dd7-4e81-885f-95b27a7b42ea nodeName:}" failed. No retries permitted until 2026-03-08 21:57:34.186153386 +0000 UTC m=+121.819425588 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-5ljhh" (UID: "7e0267ba-5dd7-4e81-885f-95b27a7b42ea") : secret "marketplace-operator-metrics" not found Mar 08 21:57:29.246462 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 08 21:57:29.348350 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 08 21:57:29.348827 master-0 systemd[1]: Stopped Kubernetes Kubelet. Mar 08 21:57:29.358433 master-0 systemd[1]: kubelet.service: Consumed 10.694s CPU time. Mar 08 21:57:29.481253 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 08 21:57:29.618578 master-0 kubenswrapper[7480]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 08 21:57:29.620335 master-0 kubenswrapper[7480]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 08 21:57:29.620418 master-0 kubenswrapper[7480]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 08 21:57:29.620495 master-0 kubenswrapper[7480]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 08 21:57:29.620554 master-0 kubenswrapper[7480]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Mar 08 21:57:29.620616 master-0 kubenswrapper[7480]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 08 21:57:29.621364 master-0 kubenswrapper[7480]: I0308 21:57:29.621031 7480 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 08 21:57:29.627300 master-0 kubenswrapper[7480]: W0308 21:57:29.627282 7480 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 08 21:57:29.627419 master-0 kubenswrapper[7480]: W0308 21:57:29.627407 7480 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 08 21:57:29.627497 master-0 kubenswrapper[7480]: W0308 21:57:29.627486 7480 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 08 21:57:29.627560 master-0 kubenswrapper[7480]: W0308 21:57:29.627551 7480 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 08 21:57:29.627624 master-0 kubenswrapper[7480]: W0308 21:57:29.627614 7480 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 08 21:57:29.627689 master-0 kubenswrapper[7480]: W0308 21:57:29.627678 7480 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 08 21:57:29.627756 master-0 kubenswrapper[7480]: W0308 21:57:29.627744 7480 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 08 21:57:29.627812 master-0 kubenswrapper[7480]: W0308 21:57:29.627803 7480 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 08 21:57:29.627875 master-0 kubenswrapper[7480]: W0308 21:57:29.627866 7480 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 08 21:57:29.627938 master-0 kubenswrapper[7480]: W0308 21:57:29.627927 7480 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 08 21:57:29.628033 master-0 kubenswrapper[7480]: W0308 21:57:29.628023 7480 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 08 21:57:29.628108 master-0 kubenswrapper[7480]: W0308 21:57:29.628098 7480 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 08 21:57:29.628175 master-0 kubenswrapper[7480]: W0308 21:57:29.628165 7480 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 08 21:57:29.628236 master-0 kubenswrapper[7480]: W0308 21:57:29.628226 7480 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 08 21:57:29.628302 master-0 kubenswrapper[7480]: W0308 21:57:29.628292 7480 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 08 21:57:29.628358 master-0 kubenswrapper[7480]: W0308 21:57:29.628349 7480 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 08 21:57:29.628411 master-0 kubenswrapper[7480]: W0308 21:57:29.628403 7480 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 08 21:57:29.628467 master-0 kubenswrapper[7480]: W0308 21:57:29.628458 7480 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 08 21:57:29.628524 master-0 kubenswrapper[7480]: W0308 21:57:29.628515 7480 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 08 21:57:29.628586 master-0 kubenswrapper[7480]: W0308 21:57:29.628576 7480 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 08 21:57:29.628647 master-0 kubenswrapper[7480]: W0308 21:57:29.628638 7480 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 08 21:57:29.628780 master-0 kubenswrapper[7480]: W0308 21:57:29.628769 7480 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 08 21:57:29.628838 master-0 kubenswrapper[7480]: W0308 21:57:29.628830 7480 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 08 21:57:29.628893 master-0 kubenswrapper[7480]: W0308 21:57:29.628885 7480 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 08 21:57:29.628945 master-0 kubenswrapper[7480]: W0308 21:57:29.628937 7480 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 08 21:57:29.628997 master-0 kubenswrapper[7480]: W0308 21:57:29.628987 7480 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 08 21:57:29.629057 master-0 kubenswrapper[7480]: W0308 21:57:29.629048 7480 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 08 21:57:29.629157 master-0 kubenswrapper[7480]: W0308 21:57:29.629146 7480 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 08 21:57:29.629226 master-0 kubenswrapper[7480]: W0308 21:57:29.629216 7480 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 08 21:57:29.629397 master-0 kubenswrapper[7480]: W0308 21:57:29.629388 7480 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 08 21:57:29.629454 master-0 kubenswrapper[7480]: W0308 21:57:29.629445 7480 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
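
The long run of "unrecognized feature gate" warnings is expected at this point: OpenShift hands its cluster-level gates (NewOLM, GatewayAPI, PinnedImages, and so on) to a kubelet wrapper that only registers upstream Kubernetes gates, and the log shows unknown names being warned about and skipped rather than treated as fatal. A sketch of lenient parsing under that assumption; the names here are hypothetical and this is not the actual feature_gate.go implementation.

package main

import (
	"fmt"
	"strings"
)

// knownGates stands in for the gates this binary registers; the
// OpenShift-specific ones are absent on purpose.
var knownGates = map[string]bool{
	"KMSv1":                     true,
	"ValidatingAdmissionPolicy": true,
	"CloudDualStackNodeIPs":     true,
}

// setFromSpec applies a "Name=bool,Name=bool" spec, warning on unknown
// gates instead of failing, consistent with the warnings above.
func setFromSpec(spec string) map[string]bool {
	enabled := make(map[string]bool)
	for _, kv := range strings.Split(spec, ",") {
		name, val, _ := strings.Cut(kv, "=")
		if !knownGates[name] {
			fmt.Printf("W feature_gate: unrecognized feature gate: %s\n", name)
			continue
		}
		enabled[name] = val == "true"
	}
	return enabled
}

func main() {
	setFromSpec("KMSv1=true,NewOLM=true,GatewayAPI=true")
}

The deprecation notices for KMSv1 and the GA gates follow the same pattern: those names are recognized and applied, but flagged because the gate is slated for removal.
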
Mar 08 21:57:29.629508 master-0 kubenswrapper[7480]: W0308 21:57:29.629500 7480 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 08 21:57:29.629567 master-0 kubenswrapper[7480]: W0308 21:57:29.629557 7480 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 08 21:57:29.629629 master-0 kubenswrapper[7480]: W0308 21:57:29.629619 7480 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 08 21:57:29.629892 master-0 kubenswrapper[7480]: W0308 21:57:29.629882 7480 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 08 21:57:29.629965 master-0 kubenswrapper[7480]: W0308 21:57:29.629956 7480 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 08 21:57:29.630019 master-0 kubenswrapper[7480]: W0308 21:57:29.630011 7480 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 08 21:57:29.630086 master-0 kubenswrapper[7480]: W0308 21:57:29.630062 7480 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 08 21:57:29.630153 master-0 kubenswrapper[7480]: W0308 21:57:29.630143 7480 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 08 21:57:29.630208 master-0 kubenswrapper[7480]: W0308 21:57:29.630199 7480 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 08 21:57:29.630270 master-0 kubenswrapper[7480]: W0308 21:57:29.630260 7480 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 08 21:57:29.630337 master-0 kubenswrapper[7480]: W0308 21:57:29.630327 7480 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 08 21:57:29.630399 master-0 kubenswrapper[7480]: W0308 21:57:29.630390 7480 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 08 21:57:29.630461 master-0 kubenswrapper[7480]: W0308 21:57:29.630451 7480 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 08 21:57:29.630521 master-0 kubenswrapper[7480]: W0308 21:57:29.630513 7480 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 08 21:57:29.630578 master-0 kubenswrapper[7480]: W0308 21:57:29.630569 7480 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 08 21:57:29.630638 master-0 kubenswrapper[7480]: W0308 21:57:29.630628 7480 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 08 21:57:29.630698 master-0 kubenswrapper[7480]: W0308 21:57:29.630689 7480 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 08 21:57:29.630762 master-0 kubenswrapper[7480]: W0308 21:57:29.630753 7480 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 08 21:57:29.630820 master-0 kubenswrapper[7480]: W0308 21:57:29.630811 7480 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 08 21:57:29.630875 master-0 kubenswrapper[7480]: W0308 21:57:29.630866 7480 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 08 21:57:29.630938 master-0 kubenswrapper[7480]: W0308 21:57:29.630928 7480 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 08 21:57:29.631004 master-0 kubenswrapper[7480]: W0308 21:57:29.630995 7480 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 08 21:57:29.631065 master-0 kubenswrapper[7480]: W0308 21:57:29.631056 7480 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 08 21:57:29.631153 master-0 kubenswrapper[7480]: W0308 21:57:29.631142 7480 feature_gate.go:330] 
unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 08 21:57:29.631205 master-0 kubenswrapper[7480]: W0308 21:57:29.631197 7480 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 08 21:57:29.631259 master-0 kubenswrapper[7480]: W0308 21:57:29.631250 7480 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 08 21:57:29.631315 master-0 kubenswrapper[7480]: W0308 21:57:29.631306 7480 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 08 21:57:29.631372 master-0 kubenswrapper[7480]: W0308 21:57:29.631363 7480 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 08 21:57:29.631539 master-0 kubenswrapper[7480]: W0308 21:57:29.631428 7480 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 08 21:57:29.631635 master-0 kubenswrapper[7480]: W0308 21:57:29.631616 7480 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 08 21:57:29.631720 master-0 kubenswrapper[7480]: W0308 21:57:29.631710 7480 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 08 21:57:29.631792 master-0 kubenswrapper[7480]: W0308 21:57:29.631782 7480 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 08 21:57:29.631851 master-0 kubenswrapper[7480]: W0308 21:57:29.631842 7480 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 08 21:57:29.631911 master-0 kubenswrapper[7480]: W0308 21:57:29.631903 7480 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 08 21:57:29.631968 master-0 kubenswrapper[7480]: W0308 21:57:29.631959 7480 feature_gate.go:330] unrecognized feature gate: Example Mar 08 21:57:29.632022 master-0 kubenswrapper[7480]: W0308 21:57:29.632014 7480 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 08 21:57:29.632132 master-0 kubenswrapper[7480]: W0308 21:57:29.632067 7480 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 08 21:57:29.632206 master-0 kubenswrapper[7480]: W0308 21:57:29.632196 7480 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 08 21:57:29.632271 master-0 kubenswrapper[7480]: W0308 21:57:29.632261 7480 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 08 21:57:29.632393 master-0 kubenswrapper[7480]: W0308 21:57:29.632382 7480 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 08 21:57:29.632459 master-0 kubenswrapper[7480]: W0308 21:57:29.632449 7480 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 08 21:57:29.632656 master-0 kubenswrapper[7480]: I0308 21:57:29.632637 7480 flags.go:64] FLAG: --address="0.0.0.0" Mar 08 21:57:29.632790 master-0 kubenswrapper[7480]: I0308 21:57:29.632769 7480 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 08 21:57:29.632865 master-0 kubenswrapper[7480]: I0308 21:57:29.632850 7480 flags.go:64] FLAG: --anonymous-auth="true" Mar 08 21:57:29.632937 master-0 kubenswrapper[7480]: I0308 21:57:29.632923 7480 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 08 21:57:29.633000 master-0 kubenswrapper[7480]: I0308 21:57:29.632990 7480 flags.go:64] FLAG: --authentication-token-webhook="false" Mar 08 21:57:29.633066 master-0 kubenswrapper[7480]: I0308 21:57:29.633052 7480 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 08 21:57:29.633158 master-0 kubenswrapper[7480]: I0308 21:57:29.633144 7480 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 08 21:57:29.633216 master-0 
kubenswrapper[7480]: I0308 21:57:29.633207 7480 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Mar 08 21:57:29.633268 master-0 kubenswrapper[7480]: I0308 21:57:29.633259 7480 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 08 21:57:29.633321 master-0 kubenswrapper[7480]: I0308 21:57:29.633312 7480 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Mar 08 21:57:29.633376 master-0 kubenswrapper[7480]: I0308 21:57:29.633367 7480 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 08 21:57:29.633426 master-0 kubenswrapper[7480]: I0308 21:57:29.633418 7480 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 08 21:57:29.633473 master-0 kubenswrapper[7480]: I0308 21:57:29.633465 7480 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Mar 08 21:57:29.633534 master-0 kubenswrapper[7480]: I0308 21:57:29.633525 7480 flags.go:64] FLAG: --cgroup-root="" Mar 08 21:57:29.633593 master-0 kubenswrapper[7480]: I0308 21:57:29.633583 7480 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 08 21:57:29.633767 master-0 kubenswrapper[7480]: I0308 21:57:29.633757 7480 flags.go:64] FLAG: --client-ca-file="" Mar 08 21:57:29.633904 master-0 kubenswrapper[7480]: I0308 21:57:29.633894 7480 flags.go:64] FLAG: --cloud-config="" Mar 08 21:57:29.634003 master-0 kubenswrapper[7480]: I0308 21:57:29.633993 7480 flags.go:64] FLAG: --cloud-provider="" Mar 08 21:57:29.634060 master-0 kubenswrapper[7480]: I0308 21:57:29.634048 7480 flags.go:64] FLAG: --cluster-dns="[]" Mar 08 21:57:29.634140 master-0 kubenswrapper[7480]: I0308 21:57:29.634130 7480 flags.go:64] FLAG: --cluster-domain="" Mar 08 21:57:29.634199 master-0 kubenswrapper[7480]: I0308 21:57:29.634189 7480 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Mar 08 21:57:29.634263 master-0 kubenswrapper[7480]: I0308 21:57:29.634253 7480 flags.go:64] FLAG: --config-dir="" Mar 08 21:57:29.634327 master-0 kubenswrapper[7480]: I0308 21:57:29.634315 7480 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Mar 08 21:57:29.634390 master-0 kubenswrapper[7480]: I0308 21:57:29.634377 7480 flags.go:64] FLAG: --container-log-max-files="5" Mar 08 21:57:29.634456 master-0 kubenswrapper[7480]: I0308 21:57:29.634445 7480 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 08 21:57:29.634538 master-0 kubenswrapper[7480]: I0308 21:57:29.634513 7480 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 08 21:57:29.634609 master-0 kubenswrapper[7480]: I0308 21:57:29.634597 7480 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Mar 08 21:57:29.634666 master-0 kubenswrapper[7480]: I0308 21:57:29.634656 7480 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 08 21:57:29.634918 master-0 kubenswrapper[7480]: I0308 21:57:29.634908 7480 flags.go:64] FLAG: --contention-profiling="false" Mar 08 21:57:29.634976 master-0 kubenswrapper[7480]: I0308 21:57:29.634965 7480 flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 08 21:57:29.635041 master-0 kubenswrapper[7480]: I0308 21:57:29.635030 7480 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 08 21:57:29.635123 master-0 kubenswrapper[7480]: I0308 21:57:29.635111 7480 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 08 21:57:29.635181 master-0 kubenswrapper[7480]: I0308 21:57:29.635170 7480 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 08 21:57:29.635236 master-0 kubenswrapper[7480]: I0308 21:57:29.635227 7480 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 08 
21:57:29.635284 master-0 kubenswrapper[7480]: I0308 21:57:29.635276 7480 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 08 21:57:29.635331 master-0 kubenswrapper[7480]: I0308 21:57:29.635323 7480 flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 08 21:57:29.635379 master-0 kubenswrapper[7480]: I0308 21:57:29.635370 7480 flags.go:64] FLAG: --enable-load-reader="false" Mar 08 21:57:29.635425 master-0 kubenswrapper[7480]: I0308 21:57:29.635417 7480 flags.go:64] FLAG: --enable-server="true" Mar 08 21:57:29.635477 master-0 kubenswrapper[7480]: I0308 21:57:29.635464 7480 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 08 21:57:29.635528 master-0 kubenswrapper[7480]: I0308 21:57:29.635520 7480 flags.go:64] FLAG: --event-burst="100" Mar 08 21:57:29.635579 master-0 kubenswrapper[7480]: I0308 21:57:29.635570 7480 flags.go:64] FLAG: --event-qps="50" Mar 08 21:57:29.635625 master-0 kubenswrapper[7480]: I0308 21:57:29.635617 7480 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 08 21:57:29.635674 master-0 kubenswrapper[7480]: I0308 21:57:29.635665 7480 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 08 21:57:29.635722 master-0 kubenswrapper[7480]: I0308 21:57:29.635711 7480 flags.go:64] FLAG: --eviction-hard="" Mar 08 21:57:29.635767 master-0 kubenswrapper[7480]: I0308 21:57:29.635759 7480 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 08 21:57:29.635814 master-0 kubenswrapper[7480]: I0308 21:57:29.635805 7480 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 08 21:57:29.635857 master-0 kubenswrapper[7480]: I0308 21:57:29.635848 7480 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 08 21:57:29.635906 master-0 kubenswrapper[7480]: I0308 21:57:29.635898 7480 flags.go:64] FLAG: --eviction-soft="" Mar 08 21:57:29.635956 master-0 kubenswrapper[7480]: I0308 21:57:29.635948 7480 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 08 21:57:29.636008 master-0 kubenswrapper[7480]: I0308 21:57:29.635998 7480 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 08 21:57:29.636126 master-0 kubenswrapper[7480]: I0308 21:57:29.636063 7480 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Mar 08 21:57:29.636204 master-0 kubenswrapper[7480]: I0308 21:57:29.636190 7480 flags.go:64] FLAG: --experimental-mounter-path="" Mar 08 21:57:29.636268 master-0 kubenswrapper[7480]: I0308 21:57:29.636257 7480 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 08 21:57:29.636327 master-0 kubenswrapper[7480]: I0308 21:57:29.636317 7480 flags.go:64] FLAG: --fail-swap-on="true" Mar 08 21:57:29.636393 master-0 kubenswrapper[7480]: I0308 21:57:29.636380 7480 flags.go:64] FLAG: --feature-gates="" Mar 08 21:57:29.636454 master-0 kubenswrapper[7480]: I0308 21:57:29.636442 7480 flags.go:64] FLAG: --file-check-frequency="20s" Mar 08 21:57:29.636512 master-0 kubenswrapper[7480]: I0308 21:57:29.636501 7480 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Mar 08 21:57:29.636577 master-0 kubenswrapper[7480]: I0308 21:57:29.636565 7480 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 08 21:57:29.636642 master-0 kubenswrapper[7480]: I0308 21:57:29.636631 7480 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 08 21:57:29.636722 master-0 kubenswrapper[7480]: I0308 21:57:29.636713 7480 flags.go:64] FLAG: --healthz-port="10248" Mar 08 21:57:29.637196 master-0 kubenswrapper[7480]: I0308 21:57:29.637180 7480 flags.go:64] FLAG: --help="false" Mar 08 21:57:29.637345 master-0 kubenswrapper[7480]: I0308 
21:57:29.637333 7480 flags.go:64] FLAG: --hostname-override="" Mar 08 21:57:29.637424 master-0 kubenswrapper[7480]: I0308 21:57:29.637413 7480 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 08 21:57:29.637564 master-0 kubenswrapper[7480]: I0308 21:57:29.637554 7480 flags.go:64] FLAG: --http-check-frequency="20s" Mar 08 21:57:29.637617 master-0 kubenswrapper[7480]: I0308 21:57:29.637608 7480 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Mar 08 21:57:29.637687 master-0 kubenswrapper[7480]: I0308 21:57:29.637678 7480 flags.go:64] FLAG: --image-credential-provider-config="" Mar 08 21:57:29.637735 master-0 kubenswrapper[7480]: I0308 21:57:29.637727 7480 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 08 21:57:29.637850 master-0 kubenswrapper[7480]: I0308 21:57:29.637784 7480 flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 08 21:57:29.637923 master-0 kubenswrapper[7480]: I0308 21:57:29.637913 7480 flags.go:64] FLAG: --image-service-endpoint="" Mar 08 21:57:29.637973 master-0 kubenswrapper[7480]: I0308 21:57:29.637964 7480 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 08 21:57:29.638056 master-0 kubenswrapper[7480]: I0308 21:57:29.638012 7480 flags.go:64] FLAG: --kube-api-burst="100" Mar 08 21:57:29.638149 master-0 kubenswrapper[7480]: I0308 21:57:29.638135 7480 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 08 21:57:29.638214 master-0 kubenswrapper[7480]: I0308 21:57:29.638204 7480 flags.go:64] FLAG: --kube-api-qps="50" Mar 08 21:57:29.638264 master-0 kubenswrapper[7480]: I0308 21:57:29.638256 7480 flags.go:64] FLAG: --kube-reserved="" Mar 08 21:57:29.638313 master-0 kubenswrapper[7480]: I0308 21:57:29.638305 7480 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 08 21:57:29.638362 master-0 kubenswrapper[7480]: I0308 21:57:29.638353 7480 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Mar 08 21:57:29.638412 master-0 kubenswrapper[7480]: I0308 21:57:29.638404 7480 flags.go:64] FLAG: --kubelet-cgroups="" Mar 08 21:57:29.638472 master-0 kubenswrapper[7480]: I0308 21:57:29.638463 7480 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Mar 08 21:57:29.638525 master-0 kubenswrapper[7480]: I0308 21:57:29.638516 7480 flags.go:64] FLAG: --lock-file="" Mar 08 21:57:29.638585 master-0 kubenswrapper[7480]: I0308 21:57:29.638575 7480 flags.go:64] FLAG: --log-cadvisor-usage="false" Mar 08 21:57:29.638652 master-0 kubenswrapper[7480]: I0308 21:57:29.638641 7480 flags.go:64] FLAG: --log-flush-frequency="5s" Mar 08 21:57:29.638831 master-0 kubenswrapper[7480]: I0308 21:57:29.638815 7480 flags.go:64] FLAG: --log-json-info-buffer-size="0" Mar 08 21:57:29.638885 master-0 kubenswrapper[7480]: I0308 21:57:29.638876 7480 flags.go:64] FLAG: --log-json-split-stream="false" Mar 08 21:57:29.638951 master-0 kubenswrapper[7480]: I0308 21:57:29.638940 7480 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 08 21:57:29.639005 master-0 kubenswrapper[7480]: I0308 21:57:29.638996 7480 flags.go:64] FLAG: --log-text-split-stream="false" Mar 08 21:57:29.639056 master-0 kubenswrapper[7480]: I0308 21:57:29.639046 7480 flags.go:64] FLAG: --logging-format="text" Mar 08 21:57:29.639136 master-0 kubenswrapper[7480]: I0308 21:57:29.639125 7480 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 08 21:57:29.639184 master-0 kubenswrapper[7480]: I0308 21:57:29.639176 7480 flags.go:64] FLAG: --make-iptables-util-chains="true" Mar 08 21:57:29.639235 master-0 kubenswrapper[7480]: I0308 21:57:29.639227 7480 
flags.go:64] FLAG: --manifest-url="" Mar 08 21:57:29.639297 master-0 kubenswrapper[7480]: I0308 21:57:29.639284 7480 flags.go:64] FLAG: --manifest-url-header="" Mar 08 21:57:29.639364 master-0 kubenswrapper[7480]: I0308 21:57:29.639352 7480 flags.go:64] FLAG: --max-housekeeping-interval="15s" Mar 08 21:57:29.639498 master-0 kubenswrapper[7480]: I0308 21:57:29.639483 7480 flags.go:64] FLAG: --max-open-files="1000000" Mar 08 21:57:29.639568 master-0 kubenswrapper[7480]: I0308 21:57:29.639557 7480 flags.go:64] FLAG: --max-pods="110" Mar 08 21:57:29.639775 master-0 kubenswrapper[7480]: I0308 21:57:29.639761 7480 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 08 21:57:29.639863 master-0 kubenswrapper[7480]: I0308 21:57:29.639851 7480 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 08 21:57:29.640051 master-0 kubenswrapper[7480]: I0308 21:57:29.640040 7480 flags.go:64] FLAG: --memory-manager-policy="None" Mar 08 21:57:29.640132 master-0 kubenswrapper[7480]: I0308 21:57:29.640119 7480 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Mar 08 21:57:29.640201 master-0 kubenswrapper[7480]: I0308 21:57:29.640191 7480 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Mar 08 21:57:29.640260 master-0 kubenswrapper[7480]: I0308 21:57:29.640251 7480 flags.go:64] FLAG: --node-ip="192.168.32.10" Mar 08 21:57:29.640316 master-0 kubenswrapper[7480]: I0308 21:57:29.640297 7480 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Mar 08 21:57:29.640372 master-0 kubenswrapper[7480]: I0308 21:57:29.640362 7480 flags.go:64] FLAG: --node-status-max-images="50" Mar 08 21:57:29.640440 master-0 kubenswrapper[7480]: I0308 21:57:29.640429 7480 flags.go:64] FLAG: --node-status-update-frequency="10s" Mar 08 21:57:29.640504 master-0 kubenswrapper[7480]: I0308 21:57:29.640495 7480 flags.go:64] FLAG: --oom-score-adj="-999" Mar 08 21:57:29.640560 master-0 kubenswrapper[7480]: I0308 21:57:29.640551 7480 flags.go:64] FLAG: --pod-cidr="" Mar 08 21:57:29.640624 master-0 kubenswrapper[7480]: I0308 21:57:29.640609 7480 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3" Mar 08 21:57:29.640701 master-0 kubenswrapper[7480]: I0308 21:57:29.640691 7480 flags.go:64] FLAG: --pod-manifest-path="" Mar 08 21:57:29.640759 master-0 kubenswrapper[7480]: I0308 21:57:29.640750 7480 flags.go:64] FLAG: --pod-max-pids="-1" Mar 08 21:57:29.640805 master-0 kubenswrapper[7480]: I0308 21:57:29.640797 7480 flags.go:64] FLAG: --pods-per-core="0" Mar 08 21:57:29.640859 master-0 kubenswrapper[7480]: I0308 21:57:29.640850 7480 flags.go:64] FLAG: --port="10250" Mar 08 21:57:29.640906 master-0 kubenswrapper[7480]: I0308 21:57:29.640898 7480 flags.go:64] FLAG: --protect-kernel-defaults="false" Mar 08 21:57:29.640953 master-0 kubenswrapper[7480]: I0308 21:57:29.640945 7480 flags.go:64] FLAG: --provider-id="" Mar 08 21:57:29.641000 master-0 kubenswrapper[7480]: I0308 21:57:29.640992 7480 flags.go:64] FLAG: --qos-reserved="" Mar 08 21:57:29.641043 master-0 kubenswrapper[7480]: I0308 21:57:29.641036 7480 flags.go:64] FLAG: --read-only-port="10255" Mar 08 21:57:29.641141 master-0 kubenswrapper[7480]: I0308 21:57:29.641131 7480 flags.go:64] FLAG: --register-node="true" Mar 08 21:57:29.641211 master-0 kubenswrapper[7480]: I0308 21:57:29.641200 7480 flags.go:64] FLAG: --register-schedulable="true" Mar 08 21:57:29.641268 master-0 
kubenswrapper[7480]: I0308 21:57:29.641254 7480 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Mar 08 21:57:29.641313 master-0 kubenswrapper[7480]: I0308 21:57:29.641305 7480 flags.go:64] FLAG: --registry-burst="10" Mar 08 21:57:29.641360 master-0 kubenswrapper[7480]: I0308 21:57:29.641352 7480 flags.go:64] FLAG: --registry-qps="5" Mar 08 21:57:29.641411 master-0 kubenswrapper[7480]: I0308 21:57:29.641402 7480 flags.go:64] FLAG: --reserved-cpus="" Mar 08 21:57:29.641468 master-0 kubenswrapper[7480]: I0308 21:57:29.641457 7480 flags.go:64] FLAG: --reserved-memory="" Mar 08 21:57:29.641520 master-0 kubenswrapper[7480]: I0308 21:57:29.641510 7480 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Mar 08 21:57:29.641573 master-0 kubenswrapper[7480]: I0308 21:57:29.641564 7480 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Mar 08 21:57:29.641626 master-0 kubenswrapper[7480]: I0308 21:57:29.641617 7480 flags.go:64] FLAG: --rotate-certificates="false" Mar 08 21:57:29.641758 master-0 kubenswrapper[7480]: I0308 21:57:29.641748 7480 flags.go:64] FLAG: --rotate-server-certificates="false" Mar 08 21:57:29.641809 master-0 kubenswrapper[7480]: I0308 21:57:29.641800 7480 flags.go:64] FLAG: --runonce="false" Mar 08 21:57:29.641856 master-0 kubenswrapper[7480]: I0308 21:57:29.641848 7480 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Mar 08 21:57:29.641905 master-0 kubenswrapper[7480]: I0308 21:57:29.641896 7480 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Mar 08 21:57:29.641954 master-0 kubenswrapper[7480]: I0308 21:57:29.641945 7480 flags.go:64] FLAG: --seccomp-default="false" Mar 08 21:57:29.641999 master-0 kubenswrapper[7480]: I0308 21:57:29.641992 7480 flags.go:64] FLAG: --serialize-image-pulls="true" Mar 08 21:57:29.642046 master-0 kubenswrapper[7480]: I0308 21:57:29.642038 7480 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Mar 08 21:57:29.642128 master-0 kubenswrapper[7480]: I0308 21:57:29.642118 7480 flags.go:64] FLAG: --storage-driver-db="cadvisor" Mar 08 21:57:29.642188 master-0 kubenswrapper[7480]: I0308 21:57:29.642179 7480 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Mar 08 21:57:29.642236 master-0 kubenswrapper[7480]: I0308 21:57:29.642227 7480 flags.go:64] FLAG: --storage-driver-password="root" Mar 08 21:57:29.642289 master-0 kubenswrapper[7480]: I0308 21:57:29.642280 7480 flags.go:64] FLAG: --storage-driver-secure="false" Mar 08 21:57:29.642339 master-0 kubenswrapper[7480]: I0308 21:57:29.642331 7480 flags.go:64] FLAG: --storage-driver-table="stats" Mar 08 21:57:29.642402 master-0 kubenswrapper[7480]: I0308 21:57:29.642390 7480 flags.go:64] FLAG: --storage-driver-user="root" Mar 08 21:57:29.643617 master-0 kubenswrapper[7480]: I0308 21:57:29.642447 7480 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Mar 08 21:57:29.643617 master-0 kubenswrapper[7480]: I0308 21:57:29.642455 7480 flags.go:64] FLAG: --sync-frequency="1m0s" Mar 08 21:57:29.643617 master-0 kubenswrapper[7480]: I0308 21:57:29.642460 7480 flags.go:64] FLAG: --system-cgroups="" Mar 08 21:57:29.643617 master-0 kubenswrapper[7480]: I0308 21:57:29.642465 7480 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Mar 08 21:57:29.643617 master-0 kubenswrapper[7480]: I0308 21:57:29.642475 7480 flags.go:64] FLAG: --system-reserved-cgroup="" Mar 08 21:57:29.643617 master-0 kubenswrapper[7480]: I0308 21:57:29.642480 7480 flags.go:64] FLAG: --tls-cert-file="" Mar 08 21:57:29.643617 master-0 
kubenswrapper[7480]: I0308 21:57:29.642484 7480 flags.go:64] FLAG: --tls-cipher-suites="[]" Mar 08 21:57:29.643617 master-0 kubenswrapper[7480]: I0308 21:57:29.642490 7480 flags.go:64] FLAG: --tls-min-version="" Mar 08 21:57:29.643617 master-0 kubenswrapper[7480]: I0308 21:57:29.642495 7480 flags.go:64] FLAG: --tls-private-key-file="" Mar 08 21:57:29.643617 master-0 kubenswrapper[7480]: I0308 21:57:29.642500 7480 flags.go:64] FLAG: --topology-manager-policy="none" Mar 08 21:57:29.643617 master-0 kubenswrapper[7480]: I0308 21:57:29.642505 7480 flags.go:64] FLAG: --topology-manager-policy-options="" Mar 08 21:57:29.643617 master-0 kubenswrapper[7480]: I0308 21:57:29.642510 7480 flags.go:64] FLAG: --topology-manager-scope="container" Mar 08 21:57:29.643617 master-0 kubenswrapper[7480]: I0308 21:57:29.642515 7480 flags.go:64] FLAG: --v="2" Mar 08 21:57:29.643617 master-0 kubenswrapper[7480]: I0308 21:57:29.642523 7480 flags.go:64] FLAG: --version="false" Mar 08 21:57:29.643617 master-0 kubenswrapper[7480]: I0308 21:57:29.642530 7480 flags.go:64] FLAG: --vmodule="" Mar 08 21:57:29.643617 master-0 kubenswrapper[7480]: I0308 21:57:29.642536 7480 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Mar 08 21:57:29.643617 master-0 kubenswrapper[7480]: I0308 21:57:29.642541 7480 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Mar 08 21:57:29.643617 master-0 kubenswrapper[7480]: W0308 21:57:29.642668 7480 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 08 21:57:29.643617 master-0 kubenswrapper[7480]: W0308 21:57:29.642673 7480 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 08 21:57:29.643617 master-0 kubenswrapper[7480]: W0308 21:57:29.642678 7480 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 08 21:57:29.643617 master-0 kubenswrapper[7480]: W0308 21:57:29.642683 7480 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 08 21:57:29.643617 master-0 kubenswrapper[7480]: W0308 21:57:29.642687 7480 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 08 21:57:29.643617 master-0 kubenswrapper[7480]: W0308 21:57:29.642708 7480 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 08 21:57:29.643617 master-0 kubenswrapper[7480]: W0308 21:57:29.642714 7480 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 08 21:57:29.644198 master-0 kubenswrapper[7480]: W0308 21:57:29.642718 7480 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 08 21:57:29.644198 master-0 kubenswrapper[7480]: W0308 21:57:29.642724 7480 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 08 21:57:29.644198 master-0 kubenswrapper[7480]: W0308 21:57:29.642730 7480 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 08 21:57:29.644198 master-0 kubenswrapper[7480]: W0308 21:57:29.642735 7480 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 08 21:57:29.644198 master-0 kubenswrapper[7480]: W0308 21:57:29.642739 7480 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 08 21:57:29.644198 master-0 kubenswrapper[7480]: W0308 21:57:29.642744 7480 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 08 21:57:29.644198 master-0 kubenswrapper[7480]: W0308 21:57:29.642750 7480 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
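The flags.go:64 FLAG dump above prints parsed command-line values before the --config file is merged on top, so several entries (for example --cgroup-driver="cgroupfs" and --authorization-mode="AlwaysAllow") are just the flag defaults here, not necessarily what the kubelet actually runs with. One way to inspect the merged, effective configuration is the kubelet's /configz endpoint proxied through the API server; this assumes cluster-admin credentials, with the node name master-0 taken from these messages:

    oc get --raw "/api/v1/nodes/master-0/proxy/configz" | jq .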
Mar 08 21:57:29.644198 master-0 kubenswrapper[7480]: W0308 21:57:29.642755 7480 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 08 21:57:29.644198 master-0 kubenswrapper[7480]: W0308 21:57:29.642760 7480 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 08 21:57:29.644198 master-0 kubenswrapper[7480]: W0308 21:57:29.642766 7480 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 08 21:57:29.644198 master-0 kubenswrapper[7480]: W0308 21:57:29.642771 7480 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 08 21:57:29.644198 master-0 kubenswrapper[7480]: W0308 21:57:29.642776 7480 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 08 21:57:29.644198 master-0 kubenswrapper[7480]: W0308 21:57:29.642780 7480 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 08 21:57:29.644198 master-0 kubenswrapper[7480]: W0308 21:57:29.642785 7480 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 08 21:57:29.644198 master-0 kubenswrapper[7480]: W0308 21:57:29.642789 7480 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 08 21:57:29.644198 master-0 kubenswrapper[7480]: W0308 21:57:29.642793 7480 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 08 21:57:29.644198 master-0 kubenswrapper[7480]: W0308 21:57:29.642799 7480 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 08 21:57:29.644198 master-0 kubenswrapper[7480]: W0308 21:57:29.642803 7480 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 08 21:57:29.644198 master-0 kubenswrapper[7480]: W0308 21:57:29.642808 7480 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 08 21:57:29.644622 master-0 kubenswrapper[7480]: W0308 21:57:29.642813 7480 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 08 21:57:29.644622 master-0 kubenswrapper[7480]: W0308 21:57:29.642817 7480 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 08 21:57:29.644622 master-0 kubenswrapper[7480]: W0308 21:57:29.642822 7480 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 08 21:57:29.644622 master-0 kubenswrapper[7480]: W0308 21:57:29.642826 7480 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 08 21:57:29.644622 master-0 kubenswrapper[7480]: W0308 21:57:29.642830 7480 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 08 21:57:29.644622 master-0 kubenswrapper[7480]: W0308 21:57:29.642834 7480 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 08 21:57:29.644622 master-0 kubenswrapper[7480]: W0308 21:57:29.642838 7480 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 08 21:57:29.644622 master-0 kubenswrapper[7480]: W0308 21:57:29.642843 7480 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 08 21:57:29.644622 master-0 kubenswrapper[7480]: W0308 21:57:29.642847 7480 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 08 21:57:29.644622 master-0 kubenswrapper[7480]: W0308 21:57:29.642851 7480 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 08 21:57:29.644622 master-0 kubenswrapper[7480]: W0308 21:57:29.642855 7480 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 08 21:57:29.644622 master-0 kubenswrapper[7480]: W0308 
21:57:29.642860 7480 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 08 21:57:29.644622 master-0 kubenswrapper[7480]: W0308 21:57:29.642864 7480 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 08 21:57:29.644622 master-0 kubenswrapper[7480]: W0308 21:57:29.642868 7480 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 08 21:57:29.644622 master-0 kubenswrapper[7480]: W0308 21:57:29.642872 7480 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 08 21:57:29.644622 master-0 kubenswrapper[7480]: W0308 21:57:29.642876 7480 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 08 21:57:29.644622 master-0 kubenswrapper[7480]: W0308 21:57:29.642880 7480 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 08 21:57:29.644622 master-0 kubenswrapper[7480]: W0308 21:57:29.642884 7480 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 08 21:57:29.644622 master-0 kubenswrapper[7480]: W0308 21:57:29.642889 7480 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 08 21:57:29.644622 master-0 kubenswrapper[7480]: W0308 21:57:29.642893 7480 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 08 21:57:29.645167 master-0 kubenswrapper[7480]: W0308 21:57:29.642898 7480 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 08 21:57:29.645167 master-0 kubenswrapper[7480]: W0308 21:57:29.642902 7480 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 08 21:57:29.645167 master-0 kubenswrapper[7480]: W0308 21:57:29.642910 7480 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 08 21:57:29.645167 master-0 kubenswrapper[7480]: W0308 21:57:29.642915 7480 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 08 21:57:29.645167 master-0 kubenswrapper[7480]: W0308 21:57:29.642920 7480 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 08 21:57:29.645167 master-0 kubenswrapper[7480]: W0308 21:57:29.642925 7480 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 08 21:57:29.645167 master-0 kubenswrapper[7480]: W0308 21:57:29.642930 7480 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 08 21:57:29.645167 master-0 kubenswrapper[7480]: W0308 21:57:29.642935 7480 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 08 21:57:29.645167 master-0 kubenswrapper[7480]: W0308 21:57:29.642939 7480 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 08 21:57:29.645167 master-0 kubenswrapper[7480]: W0308 21:57:29.642944 7480 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 08 21:57:29.645167 master-0 kubenswrapper[7480]: W0308 21:57:29.642948 7480 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 08 21:57:29.645167 master-0 kubenswrapper[7480]: W0308 21:57:29.642952 7480 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 08 21:57:29.645167 master-0 kubenswrapper[7480]: W0308 21:57:29.642957 7480 feature_gate.go:330] unrecognized feature gate: Example Mar 08 21:57:29.645167 master-0 kubenswrapper[7480]: W0308 21:57:29.642962 7480 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 08 21:57:29.645167 master-0 kubenswrapper[7480]: W0308 21:57:29.642966 7480 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 08 21:57:29.645167 master-0 kubenswrapper[7480]: W0308 21:57:29.642971 7480 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 08 21:57:29.645167 master-0 kubenswrapper[7480]: W0308 21:57:29.642975 7480 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 08 21:57:29.645167 master-0 kubenswrapper[7480]: W0308 21:57:29.642979 7480 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 08 21:57:29.645167 master-0 kubenswrapper[7480]: W0308 21:57:29.642983 7480 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 08 21:57:29.645167 master-0 kubenswrapper[7480]: W0308 21:57:29.642987 7480 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 08 21:57:29.645605 master-0 kubenswrapper[7480]: W0308 21:57:29.642992 7480 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 08 21:57:29.645605 master-0 kubenswrapper[7480]: W0308 21:57:29.642996 7480 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 08 21:57:29.645605 master-0 kubenswrapper[7480]: W0308 21:57:29.643001 7480 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 08 21:57:29.645605 master-0 kubenswrapper[7480]: W0308 21:57:29.643007 7480 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 08 21:57:29.645605 master-0 kubenswrapper[7480]: W0308 21:57:29.643012 7480 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 08 21:57:29.645605 master-0 kubenswrapper[7480]: W0308 21:57:29.643016 7480 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 08 21:57:29.645605 master-0 kubenswrapper[7480]: I0308 21:57:29.643025 7480 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 08 21:57:29.652047 master-0 kubenswrapper[7480]: I0308 21:57:29.651975 7480 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" Mar 08 21:57:29.652047 master-0 kubenswrapper[7480]: I0308 21:57:29.652039 7480 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 08 21:57:29.652206 master-0 kubenswrapper[7480]: W0308 21:57:29.652174 7480 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 08 21:57:29.652206 master-0 kubenswrapper[7480]: W0308 21:57:29.652198 7480 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 08 21:57:29.652206 master-0 kubenswrapper[7480]: W0308 21:57:29.652204 7480 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 08 21:57:29.652206 master-0 kubenswrapper[7480]: W0308 21:57:29.652212 7480 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 08 21:57:29.652661 master-0 kubenswrapper[7480]: W0308 21:57:29.652554 7480 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 08 21:57:29.652661 master-0 kubenswrapper[7480]: W0308 21:57:29.652649 7480 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 08 21:57:29.652661 master-0 kubenswrapper[7480]: W0308 21:57:29.652656 7480 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 08 21:57:29.652661 master-0 kubenswrapper[7480]: W0308 21:57:29.652662 7480 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 08 21:57:29.652798 master-0 kubenswrapper[7480]: W0308 21:57:29.652671 7480 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 08 21:57:29.652798 master-0 kubenswrapper[7480]: W0308 21:57:29.652680 7480 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 08 21:57:29.652798 master-0 kubenswrapper[7480]: W0308 21:57:29.652687 7480 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 08 21:57:29.652798 master-0 kubenswrapper[7480]: W0308 21:57:29.652720 7480 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 08 21:57:29.652798 master-0 kubenswrapper[7480]: W0308 21:57:29.652726 7480 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 08 21:57:29.652798 master-0 kubenswrapper[7480]: W0308 21:57:29.652732 7480 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 08 21:57:29.652798 master-0 kubenswrapper[7480]: W0308 21:57:29.652738 7480 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 08 21:57:29.652798 master-0 
kubenswrapper[7480]: W0308 21:57:29.652786 7480 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 08 21:57:29.652986 master-0 kubenswrapper[7480]: W0308 21:57:29.652873 7480 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 08 21:57:29.652986 master-0 kubenswrapper[7480]: W0308 21:57:29.652882 7480 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 08 21:57:29.652986 master-0 kubenswrapper[7480]: W0308 21:57:29.652888 7480 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 08 21:57:29.652986 master-0 kubenswrapper[7480]: W0308 21:57:29.652893 7480 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 08 21:57:29.652986 master-0 kubenswrapper[7480]: W0308 21:57:29.652898 7480 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 08 21:57:29.652986 master-0 kubenswrapper[7480]: W0308 21:57:29.652903 7480 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 08 21:57:29.652986 master-0 kubenswrapper[7480]: W0308 21:57:29.652908 7480 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 08 21:57:29.652986 master-0 kubenswrapper[7480]: W0308 21:57:29.652914 7480 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 08 21:57:29.652986 master-0 kubenswrapper[7480]: W0308 21:57:29.652919 7480 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 08 21:57:29.652986 master-0 kubenswrapper[7480]: W0308 21:57:29.652925 7480 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 08 21:57:29.652986 master-0 kubenswrapper[7480]: W0308 21:57:29.652930 7480 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 08 21:57:29.652986 master-0 kubenswrapper[7480]: W0308 21:57:29.652935 7480 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 08 21:57:29.652986 master-0 kubenswrapper[7480]: W0308 21:57:29.652940 7480 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 08 21:57:29.652986 master-0 kubenswrapper[7480]: W0308 21:57:29.652950 7480 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 08 21:57:29.652986 master-0 kubenswrapper[7480]: W0308 21:57:29.652955 7480 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 08 21:57:29.652986 master-0 kubenswrapper[7480]: W0308 21:57:29.652961 7480 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 08 21:57:29.652986 master-0 kubenswrapper[7480]: W0308 21:57:29.652969 7480 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
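The I-level feature_gate.go:386 line above is the signal in this block: once the unrecognized OpenShift gates are discarded, the effective map holds only upstream Kubernetes gates. Expressed as KubeletConfiguration featureGates, the explicitly-true entries from that map would look like the following sketch (illustrative only):

    # Mirrors the true-valued gates in the feature_gate.go:386 map above.
    featureGates:
      CloudDualStackNodeIPs: true
      DisableKubeletCloudCredentialProviders: true
      KMSv1: true
      StreamingCollectionEncodingToJSON: true
      StreamingCollectionEncodingToProtobuf: true
      ValidatingAdmissionPolicy: true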
Mar 08 21:57:29.652986 master-0 kubenswrapper[7480]: W0308 21:57:29.652976 7480 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 08 21:57:29.652986 master-0 kubenswrapper[7480]: W0308 21:57:29.652981 7480 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 08 21:57:29.652986 master-0 kubenswrapper[7480]: W0308 21:57:29.652987 7480 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 08 21:57:29.654043 master-0 kubenswrapper[7480]: W0308 21:57:29.652993 7480 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 08 21:57:29.654043 master-0 kubenswrapper[7480]: W0308 21:57:29.653000 7480 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 08 21:57:29.654043 master-0 kubenswrapper[7480]: W0308 21:57:29.653006 7480 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 08 21:57:29.654043 master-0 kubenswrapper[7480]: W0308 21:57:29.653013 7480 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 08 21:57:29.654043 master-0 kubenswrapper[7480]: W0308 21:57:29.653020 7480 feature_gate.go:330] unrecognized feature gate: Example Mar 08 21:57:29.654043 master-0 kubenswrapper[7480]: W0308 21:57:29.653030 7480 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 08 21:57:29.654043 master-0 kubenswrapper[7480]: W0308 21:57:29.653037 7480 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 08 21:57:29.654043 master-0 kubenswrapper[7480]: W0308 21:57:29.653044 7480 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 08 21:57:29.654043 master-0 kubenswrapper[7480]: W0308 21:57:29.653050 7480 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 08 21:57:29.654043 master-0 kubenswrapper[7480]: W0308 21:57:29.653056 7480 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 08 21:57:29.654043 master-0 kubenswrapper[7480]: W0308 21:57:29.653061 7480 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 08 21:57:29.654043 master-0 kubenswrapper[7480]: W0308 21:57:29.653067 7480 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 08 21:57:29.654043 master-0 kubenswrapper[7480]: W0308 21:57:29.653088 7480 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 08 21:57:29.654043 master-0 kubenswrapper[7480]: W0308 21:57:29.653094 7480 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 08 21:57:29.654043 master-0 kubenswrapper[7480]: W0308 21:57:29.653100 7480 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 08 21:57:29.654043 master-0 kubenswrapper[7480]: W0308 21:57:29.653142 7480 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 08 21:57:29.654043 master-0 kubenswrapper[7480]: W0308 21:57:29.653147 7480 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 08 21:57:29.654043 master-0 kubenswrapper[7480]: W0308 21:57:29.653159 7480 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 08 21:57:29.654043 master-0 kubenswrapper[7480]: W0308 21:57:29.653164 7480 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 08 21:57:29.654599 master-0 kubenswrapper[7480]: W0308 21:57:29.653170 7480 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 08 21:57:29.654599 
master-0 kubenswrapper[7480]: W0308 21:57:29.653177 7480 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 08 21:57:29.654599 master-0 kubenswrapper[7480]: W0308 21:57:29.653184 7480 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 08 21:57:29.654599 master-0 kubenswrapper[7480]: W0308 21:57:29.653190 7480 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 08 21:57:29.654599 master-0 kubenswrapper[7480]: W0308 21:57:29.653196 7480 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 08 21:57:29.654599 master-0 kubenswrapper[7480]: W0308 21:57:29.653202 7480 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 08 21:57:29.654599 master-0 kubenswrapper[7480]: W0308 21:57:29.653209 7480 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 08 21:57:29.654599 master-0 kubenswrapper[7480]: W0308 21:57:29.653214 7480 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 08 21:57:29.654599 master-0 kubenswrapper[7480]: W0308 21:57:29.653220 7480 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 08 21:57:29.654599 master-0 kubenswrapper[7480]: W0308 21:57:29.653225 7480 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 08 21:57:29.654599 master-0 kubenswrapper[7480]: W0308 21:57:29.653231 7480 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 08 21:57:29.654599 master-0 kubenswrapper[7480]: W0308 21:57:29.653239 7480 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 08 21:57:29.654599 master-0 kubenswrapper[7480]: W0308 21:57:29.653245 7480 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 08 21:57:29.654599 master-0 kubenswrapper[7480]: W0308 21:57:29.653250 7480 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 08 21:57:29.654599 master-0 kubenswrapper[7480]: W0308 21:57:29.653256 7480 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 08 21:57:29.654599 master-0 kubenswrapper[7480]: W0308 21:57:29.653262 7480 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 08 21:57:29.654599 master-0 kubenswrapper[7480]: W0308 21:57:29.653267 7480 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 08 21:57:29.655025 master-0 kubenswrapper[7480]: I0308 21:57:29.653278 7480 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 08 21:57:29.655025 master-0 kubenswrapper[7480]: W0308 21:57:29.653802 7480 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 08 21:57:29.655025 master-0 kubenswrapper[7480]: W0308 21:57:29.653816 7480 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 08 21:57:29.655025 master-0 kubenswrapper[7480]: W0308 21:57:29.653822 7480 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 08 
21:57:29.655025 master-0 kubenswrapper[7480]: W0308 21:57:29.653827 7480 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 08 21:57:29.655025 master-0 kubenswrapper[7480]: W0308 21:57:29.653833 7480 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 08 21:57:29.655025 master-0 kubenswrapper[7480]: W0308 21:57:29.653838 7480 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 08 21:57:29.655025 master-0 kubenswrapper[7480]: W0308 21:57:29.653844 7480 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 08 21:57:29.655025 master-0 kubenswrapper[7480]: W0308 21:57:29.653849 7480 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 08 21:57:29.655025 master-0 kubenswrapper[7480]: W0308 21:57:29.653855 7480 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 08 21:57:29.655025 master-0 kubenswrapper[7480]: W0308 21:57:29.653860 7480 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 08 21:57:29.655025 master-0 kubenswrapper[7480]: W0308 21:57:29.653866 7480 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 08 21:57:29.655025 master-0 kubenswrapper[7480]: W0308 21:57:29.653871 7480 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 08 21:57:29.655025 master-0 kubenswrapper[7480]: W0308 21:57:29.653881 7480 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 08 21:57:29.655460 master-0 kubenswrapper[7480]: W0308 21:57:29.653887 7480 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 08 21:57:29.655460 master-0 kubenswrapper[7480]: W0308 21:57:29.653927 7480 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 08 21:57:29.655460 master-0 kubenswrapper[7480]: W0308 21:57:29.653933 7480 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 08 21:57:29.655460 master-0 kubenswrapper[7480]: W0308 21:57:29.653938 7480 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 08 21:57:29.655460 master-0 kubenswrapper[7480]: W0308 21:57:29.653943 7480 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 08 21:57:29.655460 master-0 kubenswrapper[7480]: W0308 21:57:29.653948 7480 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 08 21:57:29.655460 master-0 kubenswrapper[7480]: W0308 21:57:29.653954 7480 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 08 21:57:29.655460 master-0 kubenswrapper[7480]: W0308 21:57:29.653959 7480 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 08 21:57:29.655460 master-0 kubenswrapper[7480]: W0308 21:57:29.653971 7480 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 08 21:57:29.655460 master-0 kubenswrapper[7480]: W0308 21:57:29.653976 7480 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 08 21:57:29.655460 master-0 kubenswrapper[7480]: W0308 21:57:29.653981 7480 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 08 21:57:29.655460 master-0 kubenswrapper[7480]: W0308 21:57:29.653986 7480 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 08 21:57:29.655460 master-0 kubenswrapper[7480]: W0308 21:57:29.653995 7480 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 08 21:57:29.655460 master-0 kubenswrapper[7480]: W0308 21:57:29.654000 7480 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 08 21:57:29.655460 master-0 kubenswrapper[7480]: W0308 21:57:29.654005 7480 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 08 21:57:29.655460 master-0 kubenswrapper[7480]: W0308 21:57:29.654011 7480 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 08 21:57:29.655460 master-0 kubenswrapper[7480]: W0308 21:57:29.654016 7480 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 08 21:57:29.655460 master-0 kubenswrapper[7480]: W0308 21:57:29.654021 7480 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 08 21:57:29.655460 master-0 kubenswrapper[7480]: W0308 21:57:29.654026 7480 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 08 21:57:29.655460 master-0 kubenswrapper[7480]: W0308 21:57:29.654031 7480 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 08 21:57:29.655912 master-0 kubenswrapper[7480]: W0308 21:57:29.654037 7480 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 08 21:57:29.655912 master-0 kubenswrapper[7480]: W0308 21:57:29.654042 7480 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 08 21:57:29.655912 master-0 kubenswrapper[7480]: W0308 21:57:29.654047 7480 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 08 21:57:29.655912 master-0 kubenswrapper[7480]: W0308 21:57:29.654054 7480 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 08 21:57:29.655912 master-0 kubenswrapper[7480]: W0308 21:57:29.654059 7480 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 08 21:57:29.655912 master-0 kubenswrapper[7480]: W0308 21:57:29.654086 7480 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 08 21:57:29.655912 master-0 kubenswrapper[7480]: W0308 21:57:29.654092 7480 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 08 21:57:29.655912 master-0 kubenswrapper[7480]: W0308 21:57:29.654097 7480 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 08 21:57:29.655912 master-0 kubenswrapper[7480]: W0308 21:57:29.654102 7480 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 08 21:57:29.655912 master-0 kubenswrapper[7480]: W0308 21:57:29.654107 7480 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 08 21:57:29.655912 master-0 kubenswrapper[7480]: W0308 21:57:29.654112 7480 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 08 21:57:29.655912 master-0 kubenswrapper[7480]: W0308 21:57:29.654117 7480 feature_gate.go:330] unrecognized feature gate: Example
Mar 08 21:57:29.655912 master-0 kubenswrapper[7480]: W0308 21:57:29.654123 7480 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 08 21:57:29.655912 master-0 kubenswrapper[7480]: W0308 21:57:29.654131 7480 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 08 21:57:29.655912 master-0 kubenswrapper[7480]: W0308 21:57:29.654158 7480 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 08 21:57:29.655912 master-0 kubenswrapper[7480]: W0308 21:57:29.654165 7480 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 08 21:57:29.655912 master-0 kubenswrapper[7480]: W0308 21:57:29.654172 7480 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 08 21:57:29.655912 master-0 kubenswrapper[7480]: W0308 21:57:29.654213 7480 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 08 21:57:29.655912 master-0 kubenswrapper[7480]: W0308 21:57:29.654221 7480 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 08 21:57:29.656369 master-0 kubenswrapper[7480]: W0308 21:57:29.654227 7480 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 08 21:57:29.656369 master-0 kubenswrapper[7480]: W0308 21:57:29.654236 7480 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 08 21:57:29.656369 master-0 kubenswrapper[7480]: W0308 21:57:29.654243 7480 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 08 21:57:29.656369 master-0 kubenswrapper[7480]: W0308 21:57:29.654248 7480 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 08 21:57:29.656369 master-0 kubenswrapper[7480]: W0308 21:57:29.654254 7480 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 08 21:57:29.656369 master-0 kubenswrapper[7480]: W0308 21:57:29.654259 7480 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 08 21:57:29.656369 master-0 kubenswrapper[7480]: W0308 21:57:29.654264 7480 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 08 21:57:29.656369 master-0 kubenswrapper[7480]: W0308 21:57:29.654269 7480 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 08 21:57:29.656369 master-0 kubenswrapper[7480]: W0308 21:57:29.654275 7480 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 08 21:57:29.656369 master-0 kubenswrapper[7480]: W0308 21:57:29.654280 7480 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 08 21:57:29.656369 master-0 kubenswrapper[7480]: W0308 21:57:29.654289 7480 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 08 21:57:29.656369 master-0 kubenswrapper[7480]: W0308 21:57:29.654294 7480 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 08 21:57:29.656369 master-0 kubenswrapper[7480]: W0308 21:57:29.654300 7480 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 08 21:57:29.656369 master-0 kubenswrapper[7480]: W0308 21:57:29.654305 7480 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 08 21:57:29.656369 master-0 kubenswrapper[7480]: W0308 21:57:29.654311 7480 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 08 21:57:29.656369 master-0 kubenswrapper[7480]: W0308 21:57:29.654317 7480 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 08 21:57:29.656369 master-0 kubenswrapper[7480]: W0308 21:57:29.654324 7480 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 08 21:57:29.656369 master-0 kubenswrapper[7480]: W0308 21:57:29.654329 7480 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 08 21:57:29.656369 master-0 kubenswrapper[7480]: W0308 21:57:29.654335 7480 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 08 21:57:29.656369 master-0 kubenswrapper[7480]: W0308 21:57:29.654340 7480 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 08 21:57:29.656835 master-0 kubenswrapper[7480]: I0308 21:57:29.654351 7480 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 08 21:57:29.656835 master-0 kubenswrapper[7480]: I0308 21:57:29.654741 7480 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 08 21:57:29.658873 master-0 kubenswrapper[7480]: I0308 21:57:29.658827 7480 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Mar 08 21:57:29.659121 master-0 kubenswrapper[7480]: I0308 21:57:29.659040 7480 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 08 21:57:29.659493 master-0 kubenswrapper[7480]: I0308 21:57:29.659457 7480 server.go:997] "Starting client certificate rotation"
Mar 08 21:57:29.659493 master-0 kubenswrapper[7480]: I0308 21:57:29.659485 7480 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 08 21:57:29.659888 master-0 kubenswrapper[7480]: I0308 21:57:29.659710 7480 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-09 21:47:40 +0000 UTC, rotation deadline is 2026-03-09 14:51:41.792802046 +0000 UTC
Mar 08 21:57:29.659888 master-0 kubenswrapper[7480]: I0308 21:57:29.659880 7480 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 16h54m12.132926617s for next certificate rotation
Mar 08 21:57:29.660621 master-0 kubenswrapper[7480]: I0308 21:57:29.660587 7480 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 08 21:57:29.662598 master-0 kubenswrapper[7480]: I0308 21:57:29.662566 7480 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 08 21:57:29.667861 master-0 kubenswrapper[7480]: I0308 21:57:29.667745 7480 log.go:25] "Validated CRI v1 runtime API"
Mar 08 21:57:29.672164 master-0 kubenswrapper[7480]: I0308 21:57:29.672125 7480 log.go:25] "Validated CRI v1 image API"
Mar 08 21:57:29.673859 master-0 kubenswrapper[7480]: I0308 21:57:29.673768 7480 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 08 21:57:29.679702 master-0 kubenswrapper[7480]: I0308 21:57:29.679634 7480 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 f06a6435-a0b4-459f-8b49-c9a78e9e4f0c:/dev/vda3]
Mar 08 21:57:29.680191 master-0 kubenswrapper[7480]: I0308 21:57:29.679698 7480 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/203faae43e1f3878693db94445a48431e9bbc8eef7fc425b238b62ca3a64799d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/203faae43e1f3878693db94445a48431e9bbc8eef7fc425b238b62ca3a64799d/userdata/shm major:0 minor:130 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2f7507c2d466367da3bbc24168dc98c7fc99ef0ee4b7823db51ec09616db7efe/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2f7507c2d466367da3bbc24168dc98c7fc99ef0ee4b7823db51ec09616db7efe/userdata/shm major:0 minor:281 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/362c3b514579828187f546dc53101831b423b473ee9512ccb87f2423ae6040c3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/362c3b514579828187f546dc53101831b423b473ee9512ccb87f2423ae6040c3/userdata/shm major:0 minor:258 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/39ad18e2cdc22131103d7ee2686ffb12580bbefadb50c1a1863e06df883204d5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/39ad18e2cdc22131103d7ee2686ffb12580bbefadb50c1a1863e06df883204d5/userdata/shm major:0 minor:249 fsType:tmpfs blockSize:0}
/run/containers/storage/overlay-containers/3df8fc2c08893e29b5ce6cbd652644e9b2d19ac599e4617011853ce2cd739da7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3df8fc2c08893e29b5ce6cbd652644e9b2d19ac599e4617011853ce2cd739da7/userdata/shm major:0 minor:114 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/427fdbe110b0876dd13174b0756ac4196ec70da6181541067d85f985ac05aca4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/427fdbe110b0876dd13174b0756ac4196ec70da6181541067d85f985ac05aca4/userdata/shm major:0 minor:260 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/44b935a06c24e92b8520f103f003c519a8f99b22186edfd342cc9323faa0eca5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/44b935a06c24e92b8520f103f003c519a8f99b22186edfd342cc9323faa0eca5/userdata/shm major:0 minor:265 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/503b7b6ea77465c9cbfc84fe62fda0b7b8ad6a8d2fd54128890065de069b7f20/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/503b7b6ea77465c9cbfc84fe62fda0b7b8ad6a8d2fd54128890065de069b7f20/userdata/shm major:0 minor:245 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5cccef451dfdb5aa39f76464f49bfb358a48c497dc23473415516a86865fc62f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5cccef451dfdb5aa39f76464f49bfb358a48c497dc23473415516a86865fc62f/userdata/shm major:0 minor:46 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/600acdcc91505be515a6dc9bb9d4094d13c856320148b4b0d5cc2092598749f2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/600acdcc91505be515a6dc9bb9d4094d13c856320148b4b0d5cc2092598749f2/userdata/shm major:0 minor:142 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/60db7aa4fe5c30fe7cef3df3e7aab11e1bd2cef81e0f4a40f64a350ab51de986/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/60db7aa4fe5c30fe7cef3df3e7aab11e1bd2cef81e0f4a40f64a350ab51de986/userdata/shm major:0 minor:253 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6798958131d9b6122a924f582d5cf236ae0ff108ba6efd07ed21d07002d8eda4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6798958131d9b6122a924f582d5cf236ae0ff108ba6efd07ed21d07002d8eda4/userdata/shm major:0 minor:268 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6a34c2634ae54a66cec214aefe9bf2e49ebc56d1b92acdc88a8676a1ce3196bd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6a34c2634ae54a66cec214aefe9bf2e49ebc56d1b92acdc88a8676a1ce3196bd/userdata/shm major:0 minor:275 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6b1e7aff193baf892eff0d308b2df2d4df9815b9047c9a97600c2e10f5583a8b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6b1e7aff193baf892eff0d308b2df2d4df9815b9047c9a97600c2e10f5583a8b/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b606b54eb942579ee14be5af80441dce4b4a9b6234020bb3e61d0131e1fde21b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b606b54eb942579ee14be5af80441dce4b4a9b6234020bb3e61d0131e1fde21b/userdata/shm major:0 minor:240 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/be3668c70e364cf34d2eac1fb81dcded6ee681b77814828fbe8aa2c73f270566/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/be3668c70e364cf34d2eac1fb81dcded6ee681b77814828fbe8aa2c73f270566/userdata/shm major:0 minor:119 
fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c29732d6a1771e6db51e932e553e6dcc162c74870f5049944a74cde5e36091d0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c29732d6a1771e6db51e932e553e6dcc162c74870f5049944a74cde5e36091d0/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c33ac92fca6e80e326ddd9d0778e2a7dba8745d75895b03f171586f048347f52/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c33ac92fca6e80e326ddd9d0778e2a7dba8745d75895b03f171586f048347f52/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c9f54e610a612acd73c7eef641d4a04d687bbce1c7479f0807ca8b7e43cd718d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c9f54e610a612acd73c7eef641d4a04d687bbce1c7479f0807ca8b7e43cd718d/userdata/shm major:0 minor:98 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d84ecfe0cb715c9b7fdf6ae6c02c8d335c1023b605928a05b4d08849816a5d3c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d84ecfe0cb715c9b7fdf6ae6c02c8d335c1023b605928a05b4d08849816a5d3c/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dc168342b2accc24dd805b536a42a0f0ef9ceaae1895f17c33c4e06a0c3e9184/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dc168342b2accc24dd805b536a42a0f0ef9ceaae1895f17c33c4e06a0c3e9184/userdata/shm major:0 minor:256 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e1a74bb495c9d9aab308272824975d3fa3476be254ef7c02bd62f9151f2ab266/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e1a74bb495c9d9aab308272824975d3fa3476be254ef7c02bd62f9151f2ab266/userdata/shm major:0 minor:237 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e5a5d91cfd17574435ef488a30976925f613e8868e1af9e7f86a003675b330e2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e5a5d91cfd17574435ef488a30976925f613e8868e1af9e7f86a003675b330e2/userdata/shm major:0 minor:263 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/eedb99ce5fd0482117fcb1e638ee1d23354e4695c591afb02611065662c5742f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/eedb99ce5fd0482117fcb1e638ee1d23354e4695c591afb02611065662c5742f/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/04fb7bdb-fb5a-4187-94a3-67c8f09684ed/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/04fb7bdb-fb5a-4187-94a3-67c8f09684ed/volumes/kubernetes.io~projected/kube-api-access major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/04fb7bdb-fb5a-4187-94a3-67c8f09684ed/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/04fb7bdb-fb5a-4187-94a3-67c8f09684ed/volumes/kubernetes.io~secret/serving-cert major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/081acedd-4c88-461f-80f3-e80fdbadb725/volumes/kubernetes.io~projected/kube-api-access-cpxls:{mountpoint:/var/lib/kubelet/pods/081acedd-4c88-461f-80f3-e80fdbadb725/volumes/kubernetes.io~projected/kube-api-access-cpxls major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/081acedd-4c88-461f-80f3-e80fdbadb725/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/081acedd-4c88-461f-80f3-e80fdbadb725/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 
fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0d851f97-b21e-432e-a4c3-dc0a8ff00e84/volumes/kubernetes.io~projected/kube-api-access-7tlmx:{mountpoint:/var/lib/kubelet/pods/0d851f97-b21e-432e-a4c3-dc0a8ff00e84/volumes/kubernetes.io~projected/kube-api-access-7tlmx major:0 minor:230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0d851f97-b21e-432e-a4c3-dc0a8ff00e84/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0d851f97-b21e-432e-a4c3-dc0a8ff00e84/volumes/kubernetes.io~secret/serving-cert major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1232f59f-4e6a-46ef-8bec-1bd4e04956ef/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/1232f59f-4e6a-46ef-8bec-1bd4e04956ef/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1232f59f-4e6a-46ef-8bec-1bd4e04956ef/volumes/kubernetes.io~projected/kube-api-access-pcqnj:{mountpoint:/var/lib/kubelet/pods/1232f59f-4e6a-46ef-8bec-1bd4e04956ef/volumes/kubernetes.io~projected/kube-api-access-pcqnj major:0 minor:127 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1232f59f-4e6a-46ef-8bec-1bd4e04956ef/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/1232f59f-4e6a-46ef-8bec-1bd4e04956ef/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:126 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1dfc8afd-2330-46a4-ae5b-36522102b332/volumes/kubernetes.io~projected/kube-api-access-jtbpk:{mountpoint:/var/lib/kubelet/pods/1dfc8afd-2330-46a4-ae5b-36522102b332/volumes/kubernetes.io~projected/kube-api-access-jtbpk major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2851c096-f5cb-4a46-a5a0-ac0b1341033b/volumes/kubernetes.io~projected/kube-api-access-2l47w:{mountpoint:/var/lib/kubelet/pods/2851c096-f5cb-4a46-a5a0-ac0b1341033b/volumes/kubernetes.io~projected/kube-api-access-2l47w major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/37bf82cb-adea-46d3-a899-136eb1d1f292/volumes/kubernetes.io~projected/kube-api-access-v6ht7:{mountpoint:/var/lib/kubelet/pods/37bf82cb-adea-46d3-a899-136eb1d1f292/volumes/kubernetes.io~projected/kube-api-access-v6ht7 major:0 minor:228 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/385e69e4-d443-44bb-8ee4-578a1c902c62/volumes/kubernetes.io~projected/kube-api-access-vxg7t:{mountpoint:/var/lib/kubelet/pods/385e69e4-d443-44bb-8ee4-578a1c902c62/volumes/kubernetes.io~projected/kube-api-access-vxg7t major:0 minor:105 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0/volumes/kubernetes.io~projected/kube-api-access-ff6pm:{mountpoint:/var/lib/kubelet/pods/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0/volumes/kubernetes.io~projected/kube-api-access-ff6pm major:0 minor:232 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657/volumes/kubernetes.io~projected/kube-api-access-96gl4:{mountpoint:/var/lib/kubelet/pods/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657/volumes/kubernetes.io~projected/kube-api-access-96gl4 major:0 minor:224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657/volumes/kubernetes.io~secret/serving-cert major:0 minor:209 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/4382d186-34e4-40af-9b92-bb17ddcaa23f/volumes/kubernetes.io~projected/kube-api-access-2hstt:{mountpoint:/var/lib/kubelet/pods/4382d186-34e4-40af-9b92-bb17ddcaa23f/volumes/kubernetes.io~projected/kube-api-access-2hstt major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4382d186-34e4-40af-9b92-bb17ddcaa23f/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/4382d186-34e4-40af-9b92-bb17ddcaa23f/volumes/kubernetes.io~secret/etcd-client major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4382d186-34e4-40af-9b92-bb17ddcaa23f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/4382d186-34e4-40af-9b92-bb17ddcaa23f/volumes/kubernetes.io~secret/serving-cert major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/44e67e41-045e-42ef-8f60-6ef15606d6a2/volumes/kubernetes.io~projected/kube-api-access-zl4xt:{mountpoint:/var/lib/kubelet/pods/44e67e41-045e-42ef-8f60-6ef15606d6a2/volumes/kubernetes.io~projected/kube-api-access-zl4xt major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4ef806a4-5486-43a9-8bfa-b1670c888dc1/volumes/kubernetes.io~projected/kube-api-access-qzlpq:{mountpoint:/var/lib/kubelet/pods/4ef806a4-5486-43a9-8bfa-b1670c888dc1/volumes/kubernetes.io~projected/kube-api-access-qzlpq major:0 minor:231 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7e0267ba-5dd7-4e81-885f-95b27a7b42ea/volumes/kubernetes.io~projected/kube-api-access-jjt52:{mountpoint:/var/lib/kubelet/pods/7e0267ba-5dd7-4e81-885f-95b27a7b42ea/volumes/kubernetes.io~projected/kube-api-access-jjt52 major:0 minor:267 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/83b5f0b6-adee-4820-8212-b4d182b178d2/volumes/kubernetes.io~projected/kube-api-access-5pwq4:{mountpoint:/var/lib/kubelet/pods/83b5f0b6-adee-4820-8212-b4d182b178d2/volumes/kubernetes.io~projected/kube-api-access-5pwq4 major:0 minor:236 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/volumes/kubernetes.io~projected/kube-api-access-vwdhp:{mountpoint:/var/lib/kubelet/pods/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/volumes/kubernetes.io~projected/kube-api-access-vwdhp major:0 minor:255 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/96a67acb-9cc6-4793-b99a-01479b239d76/volumes/kubernetes.io~projected/kube-api-access-d9xj9:{mountpoint:/var/lib/kubelet/pods/96a67acb-9cc6-4793-b99a-01479b239d76/volumes/kubernetes.io~projected/kube-api-access-d9xj9 major:0 minor:118 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/971ffa86-4d52-4dc3-ba28-03d116ec3494/volumes/kubernetes.io~projected/kube-api-access-7z7fx:{mountpoint:/var/lib/kubelet/pods/971ffa86-4d52-4dc3-ba28-03d116ec3494/volumes/kubernetes.io~projected/kube-api-access-7z7fx major:0 minor:226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/971ffa86-4d52-4dc3-ba28-03d116ec3494/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/971ffa86-4d52-4dc3-ba28-03d116ec3494/volumes/kubernetes.io~secret/serving-cert major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a21e2296-10cb-4c70-ac3e-2173d35faac4/volumes/kubernetes.io~projected/kube-api-access-7xcbb:{mountpoint:/var/lib/kubelet/pods/a21e2296-10cb-4c70-ac3e-2173d35faac4/volumes/kubernetes.io~projected/kube-api-access-7xcbb major:0 minor:95 
fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a21e2296-10cb-4c70-ac3e-2173d35faac4/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/a21e2296-10cb-4c70-ac3e-2173d35faac4/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a8e00c74-fb72-4e3d-a22c-c38a4772a813/volumes/kubernetes.io~projected/kube-api-access-gwqqw:{mountpoint:/var/lib/kubelet/pods/a8e00c74-fb72-4e3d-a22c-c38a4772a813/volumes/kubernetes.io~projected/kube-api-access-gwqqw major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a8e00c74-fb72-4e3d-a22c-c38a4772a813/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a8e00c74-fb72-4e3d-a22c-c38a4772a813/volumes/kubernetes.io~secret/serving-cert major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a913c639-ebfc-42a3-85cd-8a460027d3ec/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/a913c639-ebfc-42a3-85cd-8a460027d3ec/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a913c639-ebfc-42a3-85cd-8a460027d3ec/volumes/kubernetes.io~projected/kube-api-access-drcp8:{mountpoint:/var/lib/kubelet/pods/a913c639-ebfc-42a3-85cd-8a460027d3ec/volumes/kubernetes.io~projected/kube-api-access-drcp8 major:0 minor:252 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b358dcb7-d01f-4206-b636-b55a599a73bd/volumes/kubernetes.io~projected/kube-api-access-bmdmr:{mountpoint:/var/lib/kubelet/pods/b358dcb7-d01f-4206-b636-b55a599a73bd/volumes/kubernetes.io~projected/kube-api-access-bmdmr major:0 minor:270 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b849f992-1020-4633-98be-75705b962fa9/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/b849f992-1020-4633-98be-75705b962fa9/volumes/kubernetes.io~projected/kube-api-access major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b849f992-1020-4633-98be-75705b962fa9/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b849f992-1020-4633-98be-75705b962fa9/volumes/kubernetes.io~secret/serving-cert major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/be431b74-1116-4b0f-8b25-bbb0408411b0/volumes/kubernetes.io~projected/kube-api-access-tv57k:{mountpoint:/var/lib/kubelet/pods/be431b74-1116-4b0f-8b25-bbb0408411b0/volumes/kubernetes.io~projected/kube-api-access-tv57k major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d0641333-feda-44c5-baf5-ceee4ce3fd8f/volumes/kubernetes.io~projected/kube-api-access-784c7:{mountpoint:/var/lib/kubelet/pods/d0641333-feda-44c5-baf5-ceee4ce3fd8f/volumes/kubernetes.io~projected/kube-api-access-784c7 major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d0641333-feda-44c5-baf5-ceee4ce3fd8f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d0641333-feda-44c5-baf5-ceee4ce3fd8f/volumes/kubernetes.io~secret/serving-cert major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d287e2ca-f134-4e34-96f7-50a3055ee119/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/d287e2ca-f134-4e34-96f7-50a3055ee119/volumes/kubernetes.io~projected/kube-api-access major:0 minor:102 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d4d01185-e485-4697-92c2-31a044f25d82/volumes/kubernetes.io~projected/kube-api-access-ngf2z:{mountpoint:/var/lib/kubelet/pods/d4d01185-e485-4697-92c2-31a044f25d82/volumes/kubernetes.io~projected/kube-api-access-ngf2z major:0 minor:223 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d4d01185-e485-4697-92c2-31a044f25d82/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d4d01185-e485-4697-92c2-31a044f25d82/volumes/kubernetes.io~secret/serving-cert major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/de89c423-0f2a-440f-9fa9-92fefea84b09/volumes/kubernetes.io~projected/kube-api-access-7h4vv:{mountpoint:/var/lib/kubelet/pods/de89c423-0f2a-440f-9fa9-92fefea84b09/volumes/kubernetes.io~projected/kube-api-access-7h4vv major:0 minor:259 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/de89c423-0f2a-440f-9fa9-92fefea84b09/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/de89c423-0f2a-440f-9fa9-92fefea84b09/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/df48e7e0-0659-48e2-9b6a-32c964ff47b2/volumes/kubernetes.io~projected/kube-api-access-4dr4p:{mountpoint:/var/lib/kubelet/pods/df48e7e0-0659-48e2-9b6a-32c964ff47b2/volumes/kubernetes.io~projected/kube-api-access-4dr4p major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dfe625a1-5ba4-491f-9ab3-5d91154961a0/volumes/kubernetes.io~projected/kube-api-access-j9c64:{mountpoint:/var/lib/kubelet/pods/dfe625a1-5ba4-491f-9ab3-5d91154961a0/volumes/kubernetes.io~projected/kube-api-access-j9c64 major:0 minor:138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dfe625a1-5ba4-491f-9ab3-5d91154961a0/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/dfe625a1-5ba4-491f-9ab3-5d91154961a0/volumes/kubernetes.io~secret/webhook-cert major:0 minor:139 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f6fbc12f-3c27-4a7a-933f-43a55c960335/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/f6fbc12f-3c27-4a7a-933f-43a55c960335/volumes/kubernetes.io~projected/kube-api-access major:0 minor:234 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f6fbc12f-3c27-4a7a-933f-43a55c960335/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f6fbc12f-3c27-4a7a-933f-43a55c960335/volumes/kubernetes.io~secret/serving-cert major:0 minor:222 fsType:tmpfs blockSize:0} overlay_0-100:{mountpoint:/var/lib/containers/storage/overlay/4b9ef557e4c58fb7270c28558229e246777c7270722dd6d61328efea31d3bf3e/merged major:0 minor:100 fsType:overlay blockSize:0} overlay_0-103:{mountpoint:/var/lib/containers/storage/overlay/f955515d0892db748d3afda3e6d6141a6fe0d2c9f21dd890521b56021d2fbab4/merged major:0 minor:103 fsType:overlay blockSize:0} overlay_0-108:{mountpoint:/var/lib/containers/storage/overlay/fb44671502dcff5d2f3e251b5daaf8ccd5d187f2efd159b9b5a5650e400ca376/merged major:0 minor:108 fsType:overlay blockSize:0} overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/9ae1c55f2496c6ceff5723391c838674fba7f4b7090ecb86d435572168fac9f9/merged major:0 minor:116 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/0ca4ad78ef77c9876e9ba7d4f53af73e12dc4f6a33819e18b6fa3b342ae84964/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-131:{mountpoint:/var/lib/containers/storage/overlay/94a10454272b1374bad59efb8e08071378d3ae1cfaff5f38397c6e8d00e23d2b/merged major:0 minor:131 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/c9dc2349eab29437f1d190c4e406e8ac1fb58cc9f8a4d0d827396ef50fdc0543/merged major:0 minor:134 fsType:overlay blockSize:0} 
overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/30a16998d429daeee925269ae652616a9286fd67162e29bcf52d3591b3a919ec/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-140:{mountpoint:/var/lib/containers/storage/overlay/a609cbf49b05793501334970b634181731726727a2be8ac40e840fa381fde4a4/merged major:0 minor:140 fsType:overlay blockSize:0} overlay_0-147:{mountpoint:/var/lib/containers/storage/overlay/483ca6331dfceeca3c53378710fc64c3ae066d032338fe7e2583f4f5dc30d56d/merged major:0 minor:147 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/a35258cfdee261463788e0d8158218940fcfdbddd4b3a8a9ac69e8688c01b1aa/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/8877d620dc1d9e13384c64908317c953ba6535d946fa39e4d78f2866348db700/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/0370b25f6c02e55181a037c8c23798d3570bf687d9826f02566d4c4fcf785e0b/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-158:{mountpoint:/var/lib/containers/storage/overlay/576a0f11e9341b1bc0622cf8acf04a7a20e189027aa23151aa9f164daf0fbe5b/merged major:0 minor:158 fsType:overlay blockSize:0} overlay_0-160:{mountpoint:/var/lib/containers/storage/overlay/3d89b949d97e8f8a76c26a35724e055ed20417d01f007e60d589b2a478d468e7/merged major:0 minor:160 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/8792969b3f4c88f7923b88123140f3e458eaedc78a146ad6ba1be2364e5ad78c/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/2a518e72fa4a3c5d6699da16662bef60401b0f420e71a058b17441be9bc7acf9/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/76bc22d5d9e7495282be80f4f18b82d67f8acf877292fe52cddf77190f536890/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/f1d960b1e2deb9b29be0d38b177e6ca8100005c9f5c95802b689a7bb842e7989/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/3de24a697ad82e9f43dbb171323df53e5d2166569d8c9963d51a509832ad7955/merged major:0 minor:194 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/7252e34a64d1355c90abc39f6e1cafe9c269d2b2c86e50a831a2b6026a1babb6/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/0e972913ba2ce3b62f4fd2363580f19c71de5636991519a0e86e160f4e592110/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-271:{mountpoint:/var/lib/containers/storage/overlay/7b2dfeb16758221bee63cf6ec35b4c966cc29a93a3b1180716438ed3bd48a829/merged major:0 minor:271 fsType:overlay blockSize:0} overlay_0-273:{mountpoint:/var/lib/containers/storage/overlay/36725910f64ef7e4cc68138aa0cb6bf3441b893d0921d893a70b8f66edc8e70c/merged major:0 minor:273 fsType:overlay blockSize:0} overlay_0-277:{mountpoint:/var/lib/containers/storage/overlay/5dd09a6ae86e298977b5cf52d241380a6376cb00b618386175c37fdc9785b772/merged major:0 minor:277 fsType:overlay blockSize:0} overlay_0-279:{mountpoint:/var/lib/containers/storage/overlay/0546f88f594fe7399a103ff825f9838c0d797481d58a74579fc98afac0617e86/merged major:0 minor:279 fsType:overlay blockSize:0} overlay_0-283:{mountpoint:/var/lib/containers/storage/overlay/a02bd99cb03a0c5e9cc1c9636a5837ff64d060d5a0ca65b8d6dbe02cdf131685/merged 
major:0 minor:283 fsType:overlay blockSize:0} overlay_0-285:{mountpoint:/var/lib/containers/storage/overlay/2e3a36452524cda1396d311f3234425b1c501cca8e93a7281714792a03e4e911/merged major:0 minor:285 fsType:overlay blockSize:0} overlay_0-287:{mountpoint:/var/lib/containers/storage/overlay/5b404395730372dc8eee0f86ee27722c0b1f983789c5c3f8f82adff3c7f4b7f6/merged major:0 minor:287 fsType:overlay blockSize:0} overlay_0-289:{mountpoint:/var/lib/containers/storage/overlay/3b388c0cfc629303cf7b89af2b5de30681d039acfe0c2e900c55c34803f7bed9/merged major:0 minor:289 fsType:overlay blockSize:0} overlay_0-291:{mountpoint:/var/lib/containers/storage/overlay/c33692609526097a80e1d5ebc9f8e46f0e295a8a3eae9c2cb19f26a75a4a8425/merged major:0 minor:291 fsType:overlay blockSize:0} overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/819b8f52ca1d933aa06e8eba07158c804e060f60418ec93b546bbf259cb758f9/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-295:{mountpoint:/var/lib/containers/storage/overlay/097fe0d30b1afefa04a4c1be42cbdf01a3dcb70f68608a39b6d51b76fc3a78ed/merged major:0 minor:295 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/e18b5c92bd9a98e649424f68f6809f0fb5a6a7be3827706042e3208ed8da5e78/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/7b22ae356ce0fc905cdf8afe0447686cce8baf065a997dfa7fddfd54f3ead2ba/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-301:{mountpoint:/var/lib/containers/storage/overlay/97a30e6abe8de06eea0591109b82163abf7f1344e38558bd1576a674921df061/merged major:0 minor:301 fsType:overlay blockSize:0} overlay_0-303:{mountpoint:/var/lib/containers/storage/overlay/b79d1cd17767b60bc8c6c2f23a0b0948e3cb44de45d0d7b0e503723011ea1890/merged major:0 minor:303 fsType:overlay blockSize:0} overlay_0-309:{mountpoint:/var/lib/containers/storage/overlay/b6fd257c966c919fd4a491fe0f92dfaf9a5d3aef681440644b9b89c839e31c5f/merged major:0 minor:309 fsType:overlay blockSize:0} overlay_0-311:{mountpoint:/var/lib/containers/storage/overlay/852f1de59b8abfe125866fd95e038b168ab208e454031baba75b2d1529e6f198/merged major:0 minor:311 fsType:overlay blockSize:0} overlay_0-313:{mountpoint:/var/lib/containers/storage/overlay/ed9108faefca2218650d35ebdc7a4a14c7721b1bc0f8a7b3fa7ecc8ca1ee1172/merged major:0 minor:313 fsType:overlay blockSize:0} overlay_0-315:{mountpoint:/var/lib/containers/storage/overlay/84d8137c1dd30fd57040e14e96e65dc6d58a373435e5a8794f800892862f7226/merged major:0 minor:315 fsType:overlay blockSize:0} overlay_0-317:{mountpoint:/var/lib/containers/storage/overlay/8984b06fef8ab40aecd606fb0427dd6fe4fac838718ce87a9842977a3b2502ae/merged major:0 minor:317 fsType:overlay blockSize:0} overlay_0-319:{mountpoint:/var/lib/containers/storage/overlay/bde98688154faecd6a47532168acb541a4826fbc6ffea15139a3b5f67bbdaece/merged major:0 minor:319 fsType:overlay blockSize:0} overlay_0-321:{mountpoint:/var/lib/containers/storage/overlay/a1c3ceb2ffaf850b3d83c35889ea49f04edfbd75e46e35d5ef894226f47e62b2/merged major:0 minor:321 fsType:overlay blockSize:0} overlay_0-323:{mountpoint:/var/lib/containers/storage/overlay/fcb46b7887f47983b87cca2b3504eb8f33a41e1812548e7de5acfdab220ef5d5/merged major:0 minor:323 fsType:overlay blockSize:0} overlay_0-325:{mountpoint:/var/lib/containers/storage/overlay/560ab04550a2fe4f1aa1edd6ab0c3e261475eb5fa30fcb5794f945e882375487/merged major:0 minor:325 fsType:overlay blockSize:0} 
overlay_0-327:{mountpoint:/var/lib/containers/storage/overlay/19a27d735cf9938ca417cac3e034b7b7a7d535effcbc39e8aac2fa72e996642a/merged major:0 minor:327 fsType:overlay blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/d68eced4bd1b2392f00803c6d2f91461b22c8776c71b4d57e32379b33b669ac1/merged major:0 minor:44 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/207332ee743a60ce32b960ff4e5cc5656d3162f2a586d6d83715725e4febf570/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/4e6b4c44fa9d54c25ab72d994ace3de7037888d04bec3e081eb859653be13b48/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/22abc55a9b792bb64ca625bea6f9a151538bf53a25313d23e789033090bae79a/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/d4ddf2a5b25e58f4f1f4158511d5f352e920bd8f769efccdaad6073161a9eb4c/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/aac15df24f3bc9a25a77ee649cab9625bc40284bcbf4f0e4397d13370d5a5550/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/d04e37b611d265d735fc0b007edfe299918efceb2cf63b11026557044ad4bc9e/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-67:{mountpoint:/var/lib/containers/storage/overlay/08b4a83dfd4593093ca0d8826d8c14baabd36c2820d04e75b2c1014633bfc06f/merged major:0 minor:67 fsType:overlay blockSize:0} overlay_0-74:{mountpoint:/var/lib/containers/storage/overlay/6075f2d8c8d847c3a588bc0915196eb05e6f89220cdd5e25c25738776c5aab43/merged major:0 minor:74 fsType:overlay blockSize:0} overlay_0-76:{mountpoint:/var/lib/containers/storage/overlay/c16807afe41d99ba8d868c2a8d249759f1ea3dd35d32df695d3e8ff7b9fe272c/merged major:0 minor:76 fsType:overlay blockSize:0} overlay_0-84:{mountpoint:/var/lib/containers/storage/overlay/df328d8b95b94c6fc0efd101eeb6c8f01664cd3785594f3ec38d795a03bd7b30/merged major:0 minor:84 fsType:overlay blockSize:0} overlay_0-89:{mountpoint:/var/lib/containers/storage/overlay/366b1084f1ba06b8a63a17426bf6588ccf1a590eafd1732d7a66f2a06e0194c0/merged major:0 minor:89 fsType:overlay blockSize:0}]
Mar 08 21:57:29.709234 master-0 kubenswrapper[7480]: I0308 21:57:29.708427 7480 manager.go:217] Machine: {Timestamp:2026-03-08 21:57:29.707184939 +0000 UTC m=+0.160805581 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:60bd3117f077456eaef79571349311b3 SystemUUID:60bd3117-f077-456e-aef7-9571349311b3 BootID:6ad049a3-699b-4e1d-9b55-0bbdfa29d597 Filesystems:[{Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-315 DeviceMajor:0 DeviceMinor:315 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3df8fc2c08893e29b5ce6cbd652644e9b2d19ac599e4617011853ce2cd739da7/userdata/shm DeviceMajor:0 DeviceMinor:114 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true}
{Device:/var/lib/kubelet/pods/1232f59f-4e6a-46ef-8bec-1bd4e04956ef/volumes/kubernetes.io~projected/kube-api-access-pcqnj DeviceMajor:0 DeviceMinor:127 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-160 DeviceMajor:0 DeviceMinor:160 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4382d186-34e4-40af-9b92-bb17ddcaa23f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:221 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/de89c423-0f2a-440f-9fa9-92fefea84b09/volumes/kubernetes.io~projected/kube-api-access-7h4vv DeviceMajor:0 DeviceMinor:259 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-321 DeviceMajor:0 DeviceMinor:321 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-309 DeviceMajor:0 DeviceMinor:309 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:209 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b849f992-1020-4633-98be-75705b962fa9/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:229 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b606b54eb942579ee14be5af80441dce4b4a9b6234020bb3e61d0131e1fde21b/userdata/shm DeviceMajor:0 DeviceMinor:240 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/04fb7bdb-fb5a-4187-94a3-67c8f09684ed/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:248 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0d851f97-b21e-432e-a4c3-dc0a8ff00e84/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:217 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b358dcb7-d01f-4206-b636-b55a599a73bd/volumes/kubernetes.io~projected/kube-api-access-bmdmr DeviceMajor:0 DeviceMinor:270 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/081acedd-4c88-461f-80f3-e80fdbadb725/volumes/kubernetes.io~projected/kube-api-access-cpxls DeviceMajor:0 DeviceMinor:125 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-140 DeviceMajor:0 DeviceMinor:140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/503b7b6ea77465c9cbfc84fe62fda0b7b8ad6a8d2fd54128890065de069b7f20/userdata/shm DeviceMajor:0 DeviceMinor:245 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-84 
DeviceMajor:0 DeviceMinor:84 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/971ffa86-4d52-4dc3-ba28-03d116ec3494/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:214 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a913c639-ebfc-42a3-85cd-8a460027d3ec/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:239 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-277 DeviceMajor:0 DeviceMinor:277 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/dfe625a1-5ba4-491f-9ab3-5d91154961a0/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:139 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/60db7aa4fe5c30fe7cef3df3e7aab11e1bd2cef81e0f4a40f64a350ab51de986/userdata/shm DeviceMajor:0 DeviceMinor:253 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/44b935a06c24e92b8520f103f003c519a8f99b22186edfd342cc9323faa0eca5/userdata/shm DeviceMajor:0 DeviceMinor:265 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/eedb99ce5fd0482117fcb1e638ee1d23354e4695c591afb02611065662c5742f/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a21e2296-10cb-4c70-ac3e-2173d35faac4/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/83b5f0b6-adee-4820-8212-b4d182b178d2/volumes/kubernetes.io~projected/kube-api-access-5pwq4 DeviceMajor:0 DeviceMinor:236 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-319 DeviceMajor:0 DeviceMinor:319 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d0641333-feda-44c5-baf5-ceee4ce3fd8f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:216 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/be3668c70e364cf34d2eac1fb81dcded6ee681b77814828fbe8aa2c73f270566/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6b1e7aff193baf892eff0d308b2df2d4df9815b9047c9a97600c2e10f5583a8b/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-131 DeviceMajor:0 DeviceMinor:131 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2851c096-f5cb-4a46-a5a0-ac0b1341033b/volumes/kubernetes.io~projected/kube-api-access-2l47w 
DeviceMajor:0 DeviceMinor:244 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/39ad18e2cdc22131103d7ee2686ffb12580bbefadb50c1a1863e06df883204d5/userdata/shm DeviceMajor:0 DeviceMinor:249 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-311 DeviceMajor:0 DeviceMinor:311 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-325 DeviceMajor:0 DeviceMinor:325 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c33ac92fca6e80e326ddd9d0778e2a7dba8745d75895b03f171586f048347f52/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-103 DeviceMajor:0 DeviceMinor:103 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0d851f97-b21e-432e-a4c3-dc0a8ff00e84/volumes/kubernetes.io~projected/kube-api-access-7tlmx DeviceMajor:0 DeviceMinor:230 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4382d186-34e4-40af-9b92-bb17ddcaa23f/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:218 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:251 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/362c3b514579828187f546dc53101831b423b473ee9512ccb87f2423ae6040c3/userdata/shm DeviceMajor:0 DeviceMinor:258 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6798958131d9b6122a924f582d5cf236ae0ff108ba6efd07ed21d07002d8eda4/userdata/shm DeviceMajor:0 DeviceMinor:268 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a21e2296-10cb-4c70-ac3e-2173d35faac4/volumes/kubernetes.io~projected/kube-api-access-7xcbb DeviceMajor:0 DeviceMinor:95 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f6fbc12f-3c27-4a7a-933f-43a55c960335/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:222 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dc168342b2accc24dd805b536a42a0f0ef9ceaae1895f17c33c4e06a0c3e9184/userdata/shm DeviceMajor:0 DeviceMinor:256 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-273 DeviceMajor:0 DeviceMinor:273 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5cccef451dfdb5aa39f76464f49bfb358a48c497dc23473415516a86865fc62f/userdata/shm DeviceMajor:0 DeviceMinor:46 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-100 DeviceMajor:0 DeviceMinor:100 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-147 DeviceMajor:0 DeviceMinor:147 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4ef806a4-5486-43a9-8bfa-b1670c888dc1/volumes/kubernetes.io~projected/kube-api-access-qzlpq DeviceMajor:0 DeviceMinor:231 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/df48e7e0-0659-48e2-9b6a-32c964ff47b2/volumes/kubernetes.io~projected/kube-api-access-4dr4p DeviceMajor:0 DeviceMinor:247 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a913c639-ebfc-42a3-85cd-8a460027d3ec/volumes/kubernetes.io~projected/kube-api-access-drcp8 DeviceMajor:0 DeviceMinor:252 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c9f54e610a612acd73c7eef641d4a04d687bbce1c7479f0807ca8b7e43cd718d/userdata/shm DeviceMajor:0 DeviceMinor:98 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-108 DeviceMajor:0 DeviceMinor:108 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b849f992-1020-4633-98be-75705b962fa9/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:213 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d4d01185-e485-4697-92c2-31a044f25d82/volumes/kubernetes.io~projected/kube-api-access-ngf2z DeviceMajor:0 DeviceMinor:223 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-158 DeviceMajor:0 DeviceMinor:158 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/37bf82cb-adea-46d3-a899-136eb1d1f292/volumes/kubernetes.io~projected/kube-api-access-v6ht7 DeviceMajor:0 DeviceMinor:228 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-285 DeviceMajor:0 DeviceMinor:285 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-323 DeviceMajor:0 DeviceMinor:323 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a8e00c74-fb72-4e3d-a22c-c38a4772a813/volumes/kubernetes.io~projected/kube-api-access-gwqqw DeviceMajor:0 DeviceMinor:235 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/volumes/kubernetes.io~projected/kube-api-access-vwdhp DeviceMajor:0 DeviceMinor:255 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-271 DeviceMajor:0 DeviceMinor:271 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/600acdcc91505be515a6dc9bb9d4094d13c856320148b4b0d5cc2092598749f2/userdata/shm DeviceMajor:0 DeviceMinor:142 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1232f59f-4e6a-46ef-8bec-1bd4e04956ef/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:126 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/1dfc8afd-2330-46a4-ae5b-36522102b332/volumes/kubernetes.io~projected/kube-api-access-jtbpk DeviceMajor:0 DeviceMinor:225 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-301 DeviceMajor:0 DeviceMinor:301 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d84ecfe0cb715c9b7fdf6ae6c02c8d335c1023b605928a05b4d08849816a5d3c/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/081acedd-4c88-461f-80f3-e80fdbadb725/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f6fbc12f-3c27-4a7a-933f-43a55c960335/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:234 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/427fdbe110b0876dd13174b0756ac4196ec70da6181541067d85f985ac05aca4/userdata/shm DeviceMajor:0 DeviceMinor:260 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-289 DeviceMajor:0 DeviceMinor:289 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-89 DeviceMajor:0 DeviceMinor:89 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657/volumes/kubernetes.io~projected/kube-api-access-96gl4 DeviceMajor:0 DeviceMinor:224 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/971ffa86-4d52-4dc3-ba28-03d116ec3494/volumes/kubernetes.io~projected/kube-api-access-7z7fx DeviceMajor:0 DeviceMinor:226 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4382d186-34e4-40af-9b92-bb17ddcaa23f/volumes/kubernetes.io~projected/kube-api-access-2hstt DeviceMajor:0 DeviceMinor:233 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e1a74bb495c9d9aab308272824975d3fa3476be254ef7c02bd62f9151f2ab266/userdata/shm DeviceMajor:0 DeviceMinor:237 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/203faae43e1f3878693db94445a48431e9bbc8eef7fc425b238b62ca3a64799d/userdata/shm DeviceMajor:0 DeviceMinor:130 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0/volumes/kubernetes.io~projected/kube-api-access-ff6pm DeviceMajor:0 DeviceMinor:232 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d0641333-feda-44c5-baf5-ceee4ce3fd8f/volumes/kubernetes.io~projected/kube-api-access-784c7 DeviceMajor:0 DeviceMinor:243 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/04fb7bdb-fb5a-4187-94a3-67c8f09684ed/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:220 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-313 DeviceMajor:0 DeviceMinor:313 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-317 DeviceMajor:0 DeviceMinor:317 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-76 DeviceMajor:0 DeviceMinor:76 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-74 DeviceMajor:0 DeviceMinor:74 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/385e69e4-d443-44bb-8ee4-578a1c902c62/volumes/kubernetes.io~projected/kube-api-access-vxg7t DeviceMajor:0 DeviceMinor:105 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-303 DeviceMajor:0 DeviceMinor:303 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-67 DeviceMajor:0 DeviceMinor:67 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2f7507c2d466367da3bbc24168dc98c7fc99ef0ee4b7823db51ec09616db7efe/userdata/shm DeviceMajor:0 DeviceMinor:281 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-283 DeviceMajor:0 DeviceMinor:283 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/96a67acb-9cc6-4793-b99a-01479b239d76/volumes/kubernetes.io~projected/kube-api-access-d9xj9 DeviceMajor:0 DeviceMinor:118 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e5a5d91cfd17574435ef488a30976925f613e8868e1af9e7f86a003675b330e2/userdata/shm DeviceMajor:0 DeviceMinor:263 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6a34c2634ae54a66cec214aefe9bf2e49ebc56d1b92acdc88a8676a1ce3196bd/userdata/shm DeviceMajor:0 DeviceMinor:275 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-279 DeviceMajor:0 DeviceMinor:279 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c29732d6a1771e6db51e932e553e6dcc162c74870f5049944a74cde5e36091d0/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/de89c423-0f2a-440f-9fa9-92fefea84b09/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:242 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-291 DeviceMajor:0 DeviceMinor:291 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-295 DeviceMajor:0 DeviceMinor:295 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1232f59f-4e6a-46ef-8bec-1bd4e04956ef/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/a8e00c74-fb72-4e3d-a22c-c38a4772a813/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:215 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7e0267ba-5dd7-4e81-885f-95b27a7b42ea/volumes/kubernetes.io~projected/kube-api-access-jjt52 DeviceMajor:0 DeviceMinor:267 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-287 DeviceMajor:0 DeviceMinor:287 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-327 
DeviceMajor:0 DeviceMinor:327 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d287e2ca-f134-4e34-96f7-50a3055ee119/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:102 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/44e67e41-045e-42ef-8f60-6ef15606d6a2/volumes/kubernetes.io~projected/kube-api-access-zl4xt DeviceMajor:0 DeviceMinor:123 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/dfe625a1-5ba4-491f-9ab3-5d91154961a0/volumes/kubernetes.io~projected/kube-api-access-j9c64 DeviceMajor:0 DeviceMinor:138 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d4d01185-e485-4697-92c2-31a044f25d82/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:219 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/be431b74-1116-4b0f-8b25-bbb0408411b0/volumes/kubernetes.io~projected/kube-api-access-tv57k DeviceMajor:0 DeviceMinor:227 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:362c3b514579828 MacAddress:52:b0:d2:16:a1:0c Speed:10000 Mtu:8900} {Name:39ad18e2cdc2213 MacAddress:26:ff:39:9f:b1:0d Speed:10000 Mtu:8900} {Name:427fdbe110b0876 MacAddress:22:0a:9d:d1:46:0d Speed:10000 Mtu:8900} {Name:44b935a06c24e92 MacAddress:66:9f:b2:60:92:6e Speed:10000 Mtu:8900} {Name:503b7b6ea77465c MacAddress:5a:69:ab:29:33:7b Speed:10000 Mtu:8900} {Name:60db7aa4fe5c30f MacAddress:ca:d9:b0:b6:c2:17 Speed:10000 Mtu:8900} {Name:6798958131d9b61 MacAddress:da:a6:8d:8a:0b:cd Speed:10000 Mtu:8900} {Name:6a34c2634ae54a6 MacAddress:be:e8:e1:6c:2d:df Speed:10000 Mtu:8900} {Name:b606b54eb942579 MacAddress:ea:65:36:f9:3d:34 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:72:f3:51:ba:cd:dc Speed:0 Mtu:8900} {Name:dc168342b2accc2 MacAddress:6e:99:fb:ce:03:21 Speed:10000 Mtu:8900} {Name:e1a74bb495c9d9a MacAddress:56:56:84:0a:d2:ff Speed:10000 Mtu:8900} {Name:e5a5d91cfd17574 MacAddress:ca:f4:18:8f:14:44 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:0e:40:5e Speed:-1 Mtu:9000} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:52:ad:85:17:24:3e Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 
BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Mar 08 21:57:29.709234 master-0 kubenswrapper[7480]: I0308 21:57:29.709210 7480 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Mar 08 21:57:29.709720 master-0 kubenswrapper[7480]: I0308 21:57:29.709390 7480 manager.go:233] Version: {KernelVersion:5.14.0-427.111.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602172219-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Mar 08 21:57:29.709927 master-0 kubenswrapper[7480]: I0308 21:57:29.709892 7480 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 08 21:57:29.710175 master-0 kubenswrapper[7480]: I0308 21:57:29.710122 7480 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 08 21:57:29.710442 master-0 kubenswrapper[7480]: I0308 21:57:29.710167 7480 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 08 21:57:29.710509 master-0 kubenswrapper[7480]: I0308 21:57:29.710463 7480 topology_manager.go:138] "Creating topology manager with none policy"
Mar 08 21:57:29.710509 master-0 kubenswrapper[7480]: I0308 21:57:29.710478 7480 container_manager_linux.go:303] "Creating device plugin manager"
Mar 08 21:57:29.710509 master-0 kubenswrapper[7480]: I0308 21:57:29.710491 7480 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 08 21:57:29.710614 master-0 kubenswrapper[7480]: I0308 21:57:29.710526 7480 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 08 21:57:29.710826 master-0 kubenswrapper[7480]: I0308 21:57:29.710795 7480 state_mem.go:36] "Initialized new in-memory state store"
Mar 08 21:57:29.711421 master-0 kubenswrapper[7480]: I0308 21:57:29.711391 7480 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Mar 08 21:57:29.711530 master-0 kubenswrapper[7480]: I0308 21:57:29.711499 7480 kubelet.go:418] "Attempting to sync node with API server"
Mar 08 21:57:29.711530 master-0 kubenswrapper[7480]: I0308 21:57:29.711525 7480 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 08 21:57:29.711627 master-0 kubenswrapper[7480]: I0308 21:57:29.711546 7480 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Mar 08 21:57:29.711627 master-0 kubenswrapper[7480]: I0308 21:57:29.711565 7480 kubelet.go:324] "Adding apiserver pod source"
Mar 08 21:57:29.711627 master-0 kubenswrapper[7480]: I0308 21:57:29.711591 7480 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 08 21:57:29.714133 master-0 kubenswrapper[7480]: I0308 21:57:29.713376 7480 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1"
Mar 08 21:57:29.714133 master-0 kubenswrapper[7480]: I0308 21:57:29.713718 7480 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Mar 08 21:57:29.714431 master-0 kubenswrapper[7480]: I0308 21:57:29.714339 7480 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 08 21:57:29.714655 master-0 kubenswrapper[7480]: I0308 21:57:29.714523 7480 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Mar 08 21:57:29.714655 master-0 kubenswrapper[7480]: I0308 21:57:29.714546 7480 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Mar 08 21:57:29.714655 master-0 kubenswrapper[7480]: I0308 21:57:29.714556 7480 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Mar 08 21:57:29.714655 master-0 kubenswrapper[7480]: I0308 21:57:29.714564 7480 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Mar 08 21:57:29.714655 master-0 kubenswrapper[7480]: I0308 21:57:29.714572 7480 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Mar 08 21:57:29.714655 master-0 kubenswrapper[7480]: I0308 21:57:29.714620 7480 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Mar 08 21:57:29.714655 master-0 kubenswrapper[7480]: I0308 21:57:29.714629 7480 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Mar 08 21:57:29.714655 master-0 kubenswrapper[7480]: I0308 21:57:29.714638 7480 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Mar 08 21:57:29.714655 master-0 kubenswrapper[7480]: I0308 21:57:29.714651 7480 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Mar 08 21:57:29.714655 master-0 kubenswrapper[7480]: I0308 21:57:29.714661 7480 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Mar 08 21:57:29.715056 master-0 kubenswrapper[7480]: I0308 21:57:29.714674 7480 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Mar 08 21:57:29.715056 master-0 kubenswrapper[7480]: I0308 21:57:29.714692 7480 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Mar 08 21:57:29.716178 master-0 kubenswrapper[7480]: I0308 21:57:29.716130 7480 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Mar 08 21:57:29.716731 master-0 kubenswrapper[7480]: I0308 21:57:29.716690 7480 server.go:1280] "Started kubelet"
Mar 08 21:57:29.718701 master-0 systemd[1]: Started Kubernetes Kubelet.
Mar 08 21:57:29.725604 master-0 kubenswrapper[7480]: I0308 21:57:29.718607 7480 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 08 21:57:29.725604 master-0 kubenswrapper[7480]: I0308 21:57:29.718771 7480 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 08 21:57:29.725604 master-0 kubenswrapper[7480]: I0308 21:57:29.718893 7480 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 08 21:57:29.725604 master-0 kubenswrapper[7480]: I0308 21:57:29.719578 7480 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 08 21:57:29.729616 master-0 kubenswrapper[7480]: I0308 21:57:29.729552 7480 server.go:449] "Adding debug handlers to kubelet server" Mar 08 21:57:29.729616 master-0 kubenswrapper[7480]: I0308 21:57:29.729563 7480 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 08 21:57:29.729751 master-0 kubenswrapper[7480]: I0308 21:57:29.729623 7480 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 08 21:57:29.732172 master-0 kubenswrapper[7480]: I0308 21:57:29.730215 7480 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-09 21:47:40 +0000 UTC, rotation deadline is 2026-03-09 15:30:16.750122226 +0000 UTC Mar 08 21:57:29.732172 master-0 kubenswrapper[7480]: I0308 21:57:29.730279 7480 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h32m47.019847487s for next certificate rotation Mar 08 21:57:29.735840 master-0 kubenswrapper[7480]: I0308 21:57:29.733394 7480 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 08 21:57:29.735840 master-0 kubenswrapper[7480]: I0308 21:57:29.733965 7480 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 08 21:57:29.737336 master-0 kubenswrapper[7480]: I0308 21:57:29.736925 7480 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 08 21:57:29.737450 master-0 kubenswrapper[7480]: I0308 21:57:29.737343 7480 factory.go:55] Registering systemd factory Mar 08 21:57:29.737450 master-0 kubenswrapper[7480]: I0308 21:57:29.737373 7480 factory.go:221] Registration of the systemd container factory successfully Mar 08 21:57:29.739322 master-0 kubenswrapper[7480]: I0308 21:57:29.737877 7480 factory.go:153] Registering CRI-O factory Mar 08 21:57:29.739322 master-0 kubenswrapper[7480]: I0308 21:57:29.737905 7480 factory.go:221] Registration of the crio container factory successfully Mar 08 21:57:29.739322 master-0 kubenswrapper[7480]: I0308 21:57:29.738041 7480 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 08 21:57:29.739322 master-0 kubenswrapper[7480]: I0308 21:57:29.738118 7480 factory.go:103] Registering Raw factory Mar 08 21:57:29.739322 master-0 kubenswrapper[7480]: E0308 21:57:29.738131 7480 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 21:57:29.739322 master-0 kubenswrapper[7480]: I0308 21:57:29.738144 7480 manager.go:1196] Started watching for new ooms in manager Mar 08 21:57:29.739322 master-0 kubenswrapper[7480]: I0308 21:57:29.739205 7480 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 08 21:57:29.739322 master-0 
kubenswrapper[7480]: I0308 21:57:29.739241 7480 manager.go:319] Starting recovery of all containers Mar 08 21:57:29.740279 master-0 kubenswrapper[7480]: I0308 21:57:29.740230 7480 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 08 21:57:29.741185 master-0 kubenswrapper[7480]: I0308 21:57:29.741141 7480 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.765395 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04fb7bdb-fb5a-4187-94a3-67c8f09684ed" volumeName="kubernetes.io/configmap/04fb7bdb-fb5a-4187-94a3-67c8f09684ed-config" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.765802 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04fb7bdb-fb5a-4187-94a3-67c8f09684ed" volumeName="kubernetes.io/projected/04fb7bdb-fb5a-4187-94a3-67c8f09684ed-kube-api-access" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.765820 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04fb7bdb-fb5a-4187-94a3-67c8f09684ed" volumeName="kubernetes.io/secret/04fb7bdb-fb5a-4187-94a3-67c8f09684ed-serving-cert" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.765835 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="081acedd-4c88-461f-80f3-e80fdbadb725" volumeName="kubernetes.io/secret/081acedd-4c88-461f-80f3-e80fdbadb725-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.765850 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2851c096-f5cb-4a46-a5a0-ac0b1341033b" volumeName="kubernetes.io/configmap/2851c096-f5cb-4a46-a5a0-ac0b1341033b-trusted-ca" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.765866 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="83b5f0b6-adee-4820-8212-b4d182b178d2" volumeName="kubernetes.io/projected/83b5f0b6-adee-4820-8212-b4d182b178d2-kube-api-access-5pwq4" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.765879 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d4d01185-e485-4697-92c2-31a044f25d82" volumeName="kubernetes.io/configmap/d4d01185-e485-4697-92c2-31a044f25d82-config" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.765895 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4382d186-34e4-40af-9b92-bb17ddcaa23f" volumeName="kubernetes.io/secret/4382d186-34e4-40af-9b92-bb17ddcaa23f-etcd-client" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.765913 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1232f59f-4e6a-46ef-8bec-1bd4e04956ef" volumeName="kubernetes.io/configmap/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-env-overrides" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.765929 7480 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="2851c096-f5cb-4a46-a5a0-ac0b1341033b" volumeName="kubernetes.io/projected/2851c096-f5cb-4a46-a5a0-ac0b1341033b-kube-api-access-2l47w" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.765943 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4382d186-34e4-40af-9b92-bb17ddcaa23f" volumeName="kubernetes.io/configmap/4382d186-34e4-40af-9b92-bb17ddcaa23f-config" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.765958 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4382d186-34e4-40af-9b92-bb17ddcaa23f" volumeName="kubernetes.io/secret/4382d186-34e4-40af-9b92-bb17ddcaa23f-serving-cert" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.765972 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44e67e41-045e-42ef-8f60-6ef15606d6a2" volumeName="kubernetes.io/projected/44e67e41-045e-42ef-8f60-6ef15606d6a2-kube-api-access-zl4xt" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.765989 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="de89c423-0f2a-440f-9fa9-92fefea84b09" volumeName="kubernetes.io/projected/de89c423-0f2a-440f-9fa9-92fefea84b09-kube-api-access-7h4vv" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766004 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="385e69e4-d443-44bb-8ee4-578a1c902c62" volumeName="kubernetes.io/configmap/385e69e4-d443-44bb-8ee4-578a1c902c62-cni-binary-copy" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766019 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed" volumeName="kubernetes.io/projected/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-kube-api-access-vwdhp" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766032 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a21e2296-10cb-4c70-ac3e-2173d35faac4" volumeName="kubernetes.io/projected/a21e2296-10cb-4c70-ac3e-2173d35faac4-kube-api-access-7xcbb" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766046 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dfe625a1-5ba4-491f-9ab3-5d91154961a0" volumeName="kubernetes.io/secret/dfe625a1-5ba4-491f-9ab3-5d91154961a0-webhook-cert" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766062 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dfe625a1-5ba4-491f-9ab3-5d91154961a0" volumeName="kubernetes.io/configmap/dfe625a1-5ba4-491f-9ab3-5d91154961a0-env-overrides" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766093 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="081acedd-4c88-461f-80f3-e80fdbadb725" volumeName="kubernetes.io/configmap/081acedd-4c88-461f-80f3-e80fdbadb725-ovnkube-config" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 
kubenswrapper[7480]: I0308 21:57:29.766124 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1232f59f-4e6a-46ef-8bec-1bd4e04956ef" volumeName="kubernetes.io/secret/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-ovn-node-metrics-cert" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766140 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="385e69e4-d443-44bb-8ee4-578a1c902c62" volumeName="kubernetes.io/configmap/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-daemon-config" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766154 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4ef806a4-5486-43a9-8bfa-b1670c888dc1" volumeName="kubernetes.io/projected/4ef806a4-5486-43a9-8bfa-b1670c888dc1-kube-api-access-qzlpq" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766169 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d0641333-feda-44c5-baf5-ceee4ce3fd8f" volumeName="kubernetes.io/secret/d0641333-feda-44c5-baf5-ceee4ce3fd8f-serving-cert" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766184 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d287e2ca-f134-4e34-96f7-50a3055ee119" volumeName="kubernetes.io/configmap/d287e2ca-f134-4e34-96f7-50a3055ee119-service-ca" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766199 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="df48e7e0-0659-48e2-9b6a-32c964ff47b2" volumeName="kubernetes.io/projected/df48e7e0-0659-48e2-9b6a-32c964ff47b2-kube-api-access-4dr4p" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766216 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a913c639-ebfc-42a3-85cd-8a460027d3ec" volumeName="kubernetes.io/configmap/a913c639-ebfc-42a3-85cd-8a460027d3ec-trusted-ca" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766241 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d851f97-b21e-432e-a4c3-dc0a8ff00e84" volumeName="kubernetes.io/secret/0d851f97-b21e-432e-a4c3-dc0a8ff00e84-serving-cert" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766256 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1232f59f-4e6a-46ef-8bec-1bd4e04956ef" volumeName="kubernetes.io/configmap/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-ovnkube-script-lib" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766281 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1232f59f-4e6a-46ef-8bec-1bd4e04956ef" volumeName="kubernetes.io/configmap/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-ovnkube-config" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766295 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1232f59f-4e6a-46ef-8bec-1bd4e04956ef" 
volumeName="kubernetes.io/projected/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-kube-api-access-pcqnj" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766311 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3c50dd1f-fcbc-412c-a1cc-0738ea4464e0" volumeName="kubernetes.io/projected/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-kube-api-access-ff6pm" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766326 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f1a7900-a0b2-47fc-b43c-a0a5dee6b657" volumeName="kubernetes.io/projected/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-kube-api-access-96gl4" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766342 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed" volumeName="kubernetes.io/projected/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-bound-sa-token" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766356 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a913c639-ebfc-42a3-85cd-8a460027d3ec" volumeName="kubernetes.io/projected/a913c639-ebfc-42a3-85cd-8a460027d3ec-kube-api-access-drcp8" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766371 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d4d01185-e485-4697-92c2-31a044f25d82" volumeName="kubernetes.io/projected/d4d01185-e485-4697-92c2-31a044f25d82-kube-api-access-ngf2z" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766385 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="de89c423-0f2a-440f-9fa9-92fefea84b09" volumeName="kubernetes.io/empty-dir/de89c423-0f2a-440f-9fa9-92fefea84b09-operand-assets" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766399 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f6fbc12f-3c27-4a7a-933f-43a55c960335" volumeName="kubernetes.io/configmap/f6fbc12f-3c27-4a7a-933f-43a55c960335-config" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766413 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d851f97-b21e-432e-a4c3-dc0a8ff00e84" volumeName="kubernetes.io/configmap/0d851f97-b21e-432e-a4c3-dc0a8ff00e84-config" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766427 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1dfc8afd-2330-46a4-ae5b-36522102b332" volumeName="kubernetes.io/projected/1dfc8afd-2330-46a4-ae5b-36522102b332-kube-api-access-jtbpk" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766441 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7e0267ba-5dd7-4e81-885f-95b27a7b42ea" volumeName="kubernetes.io/projected/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-kube-api-access-jjt52" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766456 7480 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="96a67acb-9cc6-4793-b99a-01479b239d76" volumeName="kubernetes.io/configmap/96a67acb-9cc6-4793-b99a-01479b239d76-cni-binary-copy" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766469 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="971ffa86-4d52-4dc3-ba28-03d116ec3494" volumeName="kubernetes.io/projected/971ffa86-4d52-4dc3-ba28-03d116ec3494-kube-api-access-7z7fx" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766484 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8e00c74-fb72-4e3d-a22c-c38a4772a813" volumeName="kubernetes.io/secret/a8e00c74-fb72-4e3d-a22c-c38a4772a813-serving-cert" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766504 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b358dcb7-d01f-4206-b636-b55a599a73bd" volumeName="kubernetes.io/configmap/b358dcb7-d01f-4206-b636-b55a599a73bd-iptables-alerter-script" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766520 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37bf82cb-adea-46d3-a899-136eb1d1f292" volumeName="kubernetes.io/projected/37bf82cb-adea-46d3-a899-136eb1d1f292-kube-api-access-v6ht7" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766536 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed" volumeName="kubernetes.io/configmap/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-trusted-ca" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766550 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96a67acb-9cc6-4793-b99a-01479b239d76" volumeName="kubernetes.io/configmap/96a67acb-9cc6-4793-b99a-01479b239d76-whereabouts-configmap" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766563 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96a67acb-9cc6-4793-b99a-01479b239d76" volumeName="kubernetes.io/projected/96a67acb-9cc6-4793-b99a-01479b239d76-kube-api-access-d9xj9" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766578 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d0641333-feda-44c5-baf5-ceee4ce3fd8f" volumeName="kubernetes.io/empty-dir/d0641333-feda-44c5-baf5-ceee4ce3fd8f-available-featuregates" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766593 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f1a7900-a0b2-47fc-b43c-a0a5dee6b657" volumeName="kubernetes.io/configmap/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-config" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766607 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="971ffa86-4d52-4dc3-ba28-03d116ec3494" volumeName="kubernetes.io/secret/971ffa86-4d52-4dc3-ba28-03d116ec3494-serving-cert" seLinuxMountContext="" Mar 08 
21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766625 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d4d01185-e485-4697-92c2-31a044f25d82" volumeName="kubernetes.io/secret/d4d01185-e485-4697-92c2-31a044f25d82-serving-cert" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766640 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dfe625a1-5ba4-491f-9ab3-5d91154961a0" volumeName="kubernetes.io/configmap/dfe625a1-5ba4-491f-9ab3-5d91154961a0-ovnkube-identity-cm" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766658 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f1a7900-a0b2-47fc-b43c-a0a5dee6b657" volumeName="kubernetes.io/configmap/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-service-ca-bundle" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766673 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4382d186-34e4-40af-9b92-bb17ddcaa23f" volumeName="kubernetes.io/configmap/4382d186-34e4-40af-9b92-bb17ddcaa23f-etcd-service-ca" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766687 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d0641333-feda-44c5-baf5-ceee4ce3fd8f" volumeName="kubernetes.io/projected/d0641333-feda-44c5-baf5-ceee4ce3fd8f-kube-api-access-784c7" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766701 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f6fbc12f-3c27-4a7a-933f-43a55c960335" volumeName="kubernetes.io/projected/f6fbc12f-3c27-4a7a-933f-43a55c960335-kube-api-access" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766736 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96a67acb-9cc6-4793-b99a-01479b239d76" volumeName="kubernetes.io/configmap/96a67acb-9cc6-4793-b99a-01479b239d76-cni-sysctl-allowlist" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766752 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a913c639-ebfc-42a3-85cd-8a460027d3ec" volumeName="kubernetes.io/projected/a913c639-ebfc-42a3-85cd-8a460027d3ec-bound-sa-token" seLinuxMountContext="" Mar 08 21:57:29.768168 master-0 kubenswrapper[7480]: I0308 21:57:29.766768 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f1a7900-a0b2-47fc-b43c-a0a5dee6b657" volumeName="kubernetes.io/secret/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-serving-cert" seLinuxMountContext="" Mar 08 21:57:29.774431 master-0 kubenswrapper[7480]: I0308 21:57:29.774204 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4382d186-34e4-40af-9b92-bb17ddcaa23f" volumeName="kubernetes.io/projected/4382d186-34e4-40af-9b92-bb17ddcaa23f-kube-api-access-2hstt" seLinuxMountContext="" Mar 08 21:57:29.774431 master-0 kubenswrapper[7480]: I0308 21:57:29.774362 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4ef806a4-5486-43a9-8bfa-b1670c888dc1" 
volumeName="kubernetes.io/configmap/4ef806a4-5486-43a9-8bfa-b1670c888dc1-telemetry-config" seLinuxMountContext="" Mar 08 21:57:29.774431 master-0 kubenswrapper[7480]: I0308 21:57:29.774383 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4382d186-34e4-40af-9b92-bb17ddcaa23f" volumeName="kubernetes.io/configmap/4382d186-34e4-40af-9b92-bb17ddcaa23f-etcd-ca" seLinuxMountContext="" Mar 08 21:57:29.774431 master-0 kubenswrapper[7480]: I0308 21:57:29.774403 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7e0267ba-5dd7-4e81-885f-95b27a7b42ea" volumeName="kubernetes.io/configmap/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-trusted-ca" seLinuxMountContext="" Mar 08 21:57:29.774431 master-0 kubenswrapper[7480]: I0308 21:57:29.774422 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b849f992-1020-4633-98be-75705b962fa9" volumeName="kubernetes.io/projected/b849f992-1020-4633-98be-75705b962fa9-kube-api-access" seLinuxMountContext="" Mar 08 21:57:29.774431 master-0 kubenswrapper[7480]: I0308 21:57:29.774440 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b849f992-1020-4633-98be-75705b962fa9" volumeName="kubernetes.io/secret/b849f992-1020-4633-98be-75705b962fa9-serving-cert" seLinuxMountContext="" Mar 08 21:57:29.774668 master-0 kubenswrapper[7480]: I0308 21:57:29.774459 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be431b74-1116-4b0f-8b25-bbb0408411b0" volumeName="kubernetes.io/projected/be431b74-1116-4b0f-8b25-bbb0408411b0-kube-api-access-tv57k" seLinuxMountContext="" Mar 08 21:57:29.774668 master-0 kubenswrapper[7480]: I0308 21:57:29.774502 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="081acedd-4c88-461f-80f3-e80fdbadb725" volumeName="kubernetes.io/projected/081acedd-4c88-461f-80f3-e80fdbadb725-kube-api-access-cpxls" seLinuxMountContext="" Mar 08 21:57:29.774668 master-0 kubenswrapper[7480]: I0308 21:57:29.774525 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f1a7900-a0b2-47fc-b43c-a0a5dee6b657" volumeName="kubernetes.io/configmap/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-trusted-ca-bundle" seLinuxMountContext="" Mar 08 21:57:29.774668 master-0 kubenswrapper[7480]: I0308 21:57:29.774607 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="971ffa86-4d52-4dc3-ba28-03d116ec3494" volumeName="kubernetes.io/configmap/971ffa86-4d52-4dc3-ba28-03d116ec3494-config" seLinuxMountContext="" Mar 08 21:57:29.774802 master-0 kubenswrapper[7480]: I0308 21:57:29.774678 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b849f992-1020-4633-98be-75705b962fa9" volumeName="kubernetes.io/configmap/b849f992-1020-4633-98be-75705b962fa9-config" seLinuxMountContext="" Mar 08 21:57:29.774867 master-0 kubenswrapper[7480]: I0308 21:57:29.774822 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dfe625a1-5ba4-491f-9ab3-5d91154961a0" volumeName="kubernetes.io/projected/dfe625a1-5ba4-491f-9ab3-5d91154961a0-kube-api-access-j9c64" seLinuxMountContext="" Mar 08 21:57:29.775025 master-0 kubenswrapper[7480]: I0308 21:57:29.774917 7480 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="0d851f97-b21e-432e-a4c3-dc0a8ff00e84" volumeName="kubernetes.io/projected/0d851f97-b21e-432e-a4c3-dc0a8ff00e84-kube-api-access-7tlmx" seLinuxMountContext="" Mar 08 21:57:29.775386 master-0 kubenswrapper[7480]: I0308 21:57:29.775055 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a21e2296-10cb-4c70-ac3e-2173d35faac4" volumeName="kubernetes.io/secret/a21e2296-10cb-4c70-ac3e-2173d35faac4-metrics-tls" seLinuxMountContext="" Mar 08 21:57:29.775461 master-0 kubenswrapper[7480]: I0308 21:57:29.775414 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8e00c74-fb72-4e3d-a22c-c38a4772a813" volumeName="kubernetes.io/configmap/a8e00c74-fb72-4e3d-a22c-c38a4772a813-config" seLinuxMountContext="" Mar 08 21:57:29.775514 master-0 kubenswrapper[7480]: I0308 21:57:29.775461 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b358dcb7-d01f-4206-b636-b55a599a73bd" volumeName="kubernetes.io/projected/b358dcb7-d01f-4206-b636-b55a599a73bd-kube-api-access-bmdmr" seLinuxMountContext="" Mar 08 21:57:29.775514 master-0 kubenswrapper[7480]: I0308 21:57:29.775486 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d287e2ca-f134-4e34-96f7-50a3055ee119" volumeName="kubernetes.io/projected/d287e2ca-f134-4e34-96f7-50a3055ee119-kube-api-access" seLinuxMountContext="" Mar 08 21:57:29.775591 master-0 kubenswrapper[7480]: I0308 21:57:29.775533 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="081acedd-4c88-461f-80f3-e80fdbadb725" volumeName="kubernetes.io/configmap/081acedd-4c88-461f-80f3-e80fdbadb725-env-overrides" seLinuxMountContext="" Mar 08 21:57:29.775681 master-0 kubenswrapper[7480]: I0308 21:57:29.775636 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="385e69e4-d443-44bb-8ee4-578a1c902c62" volumeName="kubernetes.io/projected/385e69e4-d443-44bb-8ee4-578a1c902c62-kube-api-access-vxg7t" seLinuxMountContext="" Mar 08 21:57:29.775745 master-0 kubenswrapper[7480]: I0308 21:57:29.775681 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8e00c74-fb72-4e3d-a22c-c38a4772a813" volumeName="kubernetes.io/projected/a8e00c74-fb72-4e3d-a22c-c38a4772a813-kube-api-access-gwqqw" seLinuxMountContext="" Mar 08 21:57:29.775797 master-0 kubenswrapper[7480]: I0308 21:57:29.775771 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="de89c423-0f2a-440f-9fa9-92fefea84b09" volumeName="kubernetes.io/secret/de89c423-0f2a-440f-9fa9-92fefea84b09-cluster-olm-operator-serving-cert" seLinuxMountContext="" Mar 08 21:57:29.775832 master-0 kubenswrapper[7480]: I0308 21:57:29.775796 7480 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f6fbc12f-3c27-4a7a-933f-43a55c960335" volumeName="kubernetes.io/secret/f6fbc12f-3c27-4a7a-933f-43a55c960335-serving-cert" seLinuxMountContext="" Mar 08 21:57:29.775864 master-0 kubenswrapper[7480]: I0308 21:57:29.775803 7480 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Mar 08 21:57:29.775908 master-0 kubenswrapper[7480]: I0308 21:57:29.775837 7480 reconstruct.go:97] "Volume reconstruction finished" Mar 08 21:57:29.775941 master-0 kubenswrapper[7480]: I0308 21:57:29.775915 7480 reconciler.go:26] "Reconciler: start to sync state" Mar 08 21:57:29.779685 master-0 kubenswrapper[7480]: I0308 21:57:29.779642 7480 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 08 21:57:29.779755 master-0 kubenswrapper[7480]: I0308 21:57:29.779694 7480 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 08 21:57:29.779755 master-0 kubenswrapper[7480]: I0308 21:57:29.779708 7480 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 08 21:57:29.779755 master-0 kubenswrapper[7480]: I0308 21:57:29.779756 7480 kubelet.go:2335] "Starting kubelet main sync loop" Mar 08 21:57:29.779876 master-0 kubenswrapper[7480]: E0308 21:57:29.779816 7480 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 08 21:57:29.781310 master-0 kubenswrapper[7480]: I0308 21:57:29.781263 7480 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 08 21:57:29.810205 master-0 kubenswrapper[7480]: I0308 21:57:29.810129 7480 generic.go:334] "Generic (PLEG): container finished" podID="e15fa7c1-65ea-4956-a262-841d8a79c49f" containerID="6fd82c9a243ac415559b6058cdd8b371086e0c724a6c0dd643229ce1967ee982" exitCode=0 Mar 08 21:57:29.819452 master-0 kubenswrapper[7480]: I0308 21:57:29.819381 7480 generic.go:334] "Generic (PLEG): container finished" podID="1232f59f-4e6a-46ef-8bec-1bd4e04956ef" containerID="9c0dad4facbead9173c18e63c1454c1d466a90a1041e6859864e005008acb001" exitCode=0 Mar 08 21:57:29.834747 master-0 kubenswrapper[7480]: I0308 21:57:29.834672 7480 generic.go:334] "Generic (PLEG): container finished" podID="96a67acb-9cc6-4793-b99a-01479b239d76" containerID="db4187056969875e15e546fde8b086c9df68d0dfd1ba3b2a7d33cdf8f2598f9a" exitCode=0 Mar 08 21:57:29.834747 master-0 kubenswrapper[7480]: I0308 21:57:29.834729 7480 generic.go:334] "Generic (PLEG): container finished" podID="96a67acb-9cc6-4793-b99a-01479b239d76" containerID="ba570d5274abc3eff808a6feca603573aedab7307cfb102965df1c84daee657a" exitCode=0 Mar 08 21:57:29.834747 master-0 kubenswrapper[7480]: I0308 21:57:29.834741 7480 generic.go:334] "Generic (PLEG): container finished" podID="96a67acb-9cc6-4793-b99a-01479b239d76" containerID="719c0f1133120f686febe97b7386aa26236fdb7648305df23056b3e40ec22875" exitCode=0 Mar 08 21:57:29.834747 master-0 kubenswrapper[7480]: I0308 21:57:29.834751 7480 generic.go:334] "Generic (PLEG): container finished" podID="96a67acb-9cc6-4793-b99a-01479b239d76" containerID="332b44c02955cc191872da4d797a1cc566a290dcc3b5e3b8b9e49f2a86f283e8" exitCode=0 Mar 08 21:57:29.834747 master-0 kubenswrapper[7480]: I0308 21:57:29.834760 7480 generic.go:334] "Generic (PLEG): container finished" podID="96a67acb-9cc6-4793-b99a-01479b239d76" containerID="8100187bff84fd39b1869b62c92c77062e916e1f9e3462572f5572d1caef3b83" exitCode=0 Mar 08 21:57:29.834747 master-0 kubenswrapper[7480]: I0308 21:57:29.834769 7480 generic.go:334] "Generic (PLEG): container finished" podID="96a67acb-9cc6-4793-b99a-01479b239d76" containerID="1de5c137bbb7c8c06869f9101463a33e4cb94c8693913396854f5dedf16bf314" exitCode=0 Mar 08 21:57:29.840743 master-0 kubenswrapper[7480]: I0308 21:57:29.840690 7480 generic.go:334] "Generic (PLEG): 
container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="da776c7c3ffac41c9193152c13ad24a2c2d14135225b75898e7c53fb459df62b" exitCode=0 Mar 08 21:57:29.847934 master-0 kubenswrapper[7480]: I0308 21:57:29.847887 7480 generic.go:334] "Generic (PLEG): container finished" podID="0a43561f-bdde-456b-b4a4-2055d4fe6880" containerID="075540abc9ccd6697e1ff04ade4d337fce9916d26b47b35e3ef665f65e8db6d7" exitCode=0 Mar 08 21:57:29.851320 master-0 kubenswrapper[7480]: I0308 21:57:29.851275 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log" Mar 08 21:57:29.851669 master-0 kubenswrapper[7480]: I0308 21:57:29.851629 7480 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="f2753c6ede26e51916276b3918863819c08fcf1e3cfeb773ba0609d9fda8556b" exitCode=1 Mar 08 21:57:29.851761 master-0 kubenswrapper[7480]: I0308 21:57:29.851738 7480 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="19b1636ab72d9a9b9983713d62f8565fb7c16719c6345915ce9c3d89fbded136" exitCode=0 Mar 08 21:57:29.881264 master-0 kubenswrapper[7480]: E0308 21:57:29.881168 7480 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 08 21:57:29.904898 master-0 kubenswrapper[7480]: I0308 21:57:29.904868 7480 manager.go:324] Recovery completed Mar 08 21:57:29.938120 master-0 kubenswrapper[7480]: I0308 21:57:29.938038 7480 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 08 21:57:29.938120 master-0 kubenswrapper[7480]: I0308 21:57:29.938103 7480 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 08 21:57:29.938437 master-0 kubenswrapper[7480]: I0308 21:57:29.938136 7480 state_mem.go:36] "Initialized new in-memory state store" Mar 08 21:57:29.938812 master-0 kubenswrapper[7480]: I0308 21:57:29.938678 7480 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 08 21:57:29.938812 master-0 kubenswrapper[7480]: I0308 21:57:29.938724 7480 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 08 21:57:29.938812 master-0 kubenswrapper[7480]: I0308 21:57:29.938759 7480 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Mar 08 21:57:29.938812 master-0 kubenswrapper[7480]: I0308 21:57:29.938771 7480 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Mar 08 21:57:29.938812 master-0 kubenswrapper[7480]: I0308 21:57:29.938808 7480 policy_none.go:49] "None policy: Start" Mar 08 21:57:29.943590 master-0 kubenswrapper[7480]: I0308 21:57:29.943547 7480 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 08 21:57:29.943674 master-0 kubenswrapper[7480]: I0308 21:57:29.943610 7480 state_mem.go:35] "Initializing new in-memory state store" Mar 08 21:57:29.944017 master-0 kubenswrapper[7480]: I0308 21:57:29.943981 7480 state_mem.go:75] "Updated machine memory state" Mar 08 21:57:29.944110 master-0 kubenswrapper[7480]: I0308 21:57:29.944004 7480 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Mar 08 21:57:29.956177 master-0 kubenswrapper[7480]: I0308 21:57:29.956152 7480 manager.go:334] "Starting Device Plugin manager" Mar 08 21:57:29.956342 master-0 kubenswrapper[7480]: I0308 21:57:29.956323 7480 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 08 21:57:29.956419 
master-0 kubenswrapper[7480]: I0308 21:57:29.956406 7480 server.go:79] "Starting device plugin registration server" Mar 08 21:57:29.956987 master-0 kubenswrapper[7480]: I0308 21:57:29.956962 7480 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 08 21:57:29.957293 master-0 kubenswrapper[7480]: I0308 21:57:29.957134 7480 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 08 21:57:29.957628 master-0 kubenswrapper[7480]: I0308 21:57:29.957564 7480 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 08 21:57:29.957802 master-0 kubenswrapper[7480]: I0308 21:57:29.957743 7480 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 08 21:57:29.957802 master-0 kubenswrapper[7480]: I0308 21:57:29.957759 7480 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 08 21:57:30.058769 master-0 kubenswrapper[7480]: I0308 21:57:30.058561 7480 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 21:57:30.061233 master-0 kubenswrapper[7480]: I0308 21:57:30.061203 7480 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 21:57:30.061296 master-0 kubenswrapper[7480]: I0308 21:57:30.061236 7480 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 21:57:30.061296 master-0 kubenswrapper[7480]: I0308 21:57:30.061246 7480 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 21:57:30.061296 master-0 kubenswrapper[7480]: I0308 21:57:30.061284 7480 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 08 21:57:30.075709 master-0 kubenswrapper[7480]: I0308 21:57:30.075665 7480 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Mar 08 21:57:30.075939 master-0 kubenswrapper[7480]: I0308 21:57:30.075817 7480 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Mar 08 21:57:30.081753 master-0 kubenswrapper[7480]: I0308 21:57:30.081650 7480 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"] Mar 08 21:57:30.083633 master-0 kubenswrapper[7480]: I0308 21:57:30.083551 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"ca95d22d6228d434ce4ed2f415b15a00e7effc076e30de148f0569774a6d01db"} Mar 08 21:57:30.083708 master-0 kubenswrapper[7480]: I0308 21:57:30.083641 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"eedb99ce5fd0482117fcb1e638ee1d23354e4695c591afb02611065662c5742f"} Mar 08 21:57:30.083708 master-0 kubenswrapper[7480]: I0308 21:57:30.083671 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"f50874fd44a38fe2052c0dd021aa5c5eab2b987367eeee5b46f35dae49f0f668"} Mar 08 21:57:30.083708 master-0 
Mar 08 21:57:30.083708 master-0 kubenswrapper[7480]: I0308 21:57:30.083683 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"5cccef451dfdb5aa39f76464f49bfb358a48c497dc23473415516a86865fc62f"}
Mar 08 21:57:30.083874 master-0 kubenswrapper[7480]: I0308 21:57:30.083711 7480 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3f130b1d0e4df99a0135c201a74b309f0683706f393c93621bb731d2032758d"
Mar 08 21:57:30.083874 master-0 kubenswrapper[7480]: I0308 21:57:30.083729 7480 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63b660af69f2087dc4c60773633358d5b6c0baf9d89578945f2e2d8011d5c68e"
Mar 08 21:57:30.083874 master-0 kubenswrapper[7480]: I0308 21:57:30.083795 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"81880effd0e6f8229eefecfa74f76d169bbd4c02b4efe891a8b85181d0ccd2ca"}
Mar 08 21:57:30.083874 master-0 kubenswrapper[7480]: I0308 21:57:30.083813 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"8d8ef0d2f7570923c4fa1a9617292413de2da9937c525cc65b8fbe3433d3ca3e"}
Mar 08 21:57:30.083874 master-0 kubenswrapper[7480]: I0308 21:57:30.083827 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerDied","Data":"da776c7c3ffac41c9193152c13ad24a2c2d14135225b75898e7c53fb459df62b"}
Mar 08 21:57:30.083874 master-0 kubenswrapper[7480]: I0308 21:57:30.083845 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"d84ecfe0cb715c9b7fdf6ae6c02c8d335c1023b605928a05b4d08849816a5d3c"}
Mar 08 21:57:30.083874 master-0 kubenswrapper[7480]: I0308 21:57:30.083861 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"fb0495eb908674ff43dd829011c8a9c25cb94f391266361a6fd4682f944d4674"}
Mar 08 21:57:30.083874 master-0 kubenswrapper[7480]: I0308 21:57:30.083874 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"a38551dfdc41a69ef8701c103d6e1d1e4d82312c574f00743d9049e10be45331"}
Mar 08 21:57:30.084159 master-0 kubenswrapper[7480]: I0308 21:57:30.083887 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"c33ac92fca6e80e326ddd9d0778e2a7dba8745d75895b03f171586f048347f52"}
Mar 08 21:57:30.084159 master-0 kubenswrapper[7480]: I0308 21:57:30.083903 7480 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="996a90111a18f993b31c6404a8133e717c780ce0cf180dace60851f053db5034"
Mar 08 21:57:30.084159 master-0 kubenswrapper[7480]: I0308 21:57:30.083917 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"141c1c193013aba156bcafd70b058b224242057d2cf9f83ba4dd26b8100e4d3f"}
Mar 08 21:57:30.084159 master-0 kubenswrapper[7480]: I0308 21:57:30.084052 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"f2753c6ede26e51916276b3918863819c08fcf1e3cfeb773ba0609d9fda8556b"}
Mar 08 21:57:30.084303 master-0 kubenswrapper[7480]: I0308 21:57:30.084191 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"19b1636ab72d9a9b9983713d62f8565fb7c16719c6345915ce9c3d89fbded136"}
Mar 08 21:57:30.084303 master-0 kubenswrapper[7480]: I0308 21:57:30.084230 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"c29732d6a1771e6db51e932e553e6dcc162c74870f5049944a74cde5e36091d0"}
Mar 08 21:57:30.103284 master-0 kubenswrapper[7480]: W0308 21:57:30.103200 7480 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Mar 08 21:57:30.103538 master-0 kubenswrapper[7480]: E0308 21:57:30.103284 7480 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 21:57:30.103538 master-0 kubenswrapper[7480]: E0308 21:57:30.103313 7480 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 21:57:30.103538 master-0 kubenswrapper[7480]: E0308 21:57:30.103316 7480 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0"
Mar 08 21:57:30.103538 master-0 kubenswrapper[7480]: E0308 21:57:30.103224 7480 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 08 21:57:30.111498 master-0 kubenswrapper[7480]: E0308 21:57:30.111436 7480 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 21:57:30.185541 master-0 kubenswrapper[7480]: I0308 21:57:30.185464 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 21:57:30.185541 master-0 kubenswrapper[7480]: I0308 21:57:30.185537 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 21:57:30.186064 master-0 kubenswrapper[7480]: I0308 21:57:30.185579 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 08 21:57:30.186064 master-0 kubenswrapper[7480]: I0308 21:57:30.185621 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 21:57:30.186064 master-0 kubenswrapper[7480]: I0308 21:57:30.185661 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 21:57:30.186064 master-0 kubenswrapper[7480]: I0308 21:57:30.185696 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 21:57:30.186064 master-0 kubenswrapper[7480]: I0308 21:57:30.185730 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 21:57:30.186064 master-0 kubenswrapper[7480]: I0308 21:57:30.185766 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: 
\"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 21:57:30.186064 master-0 kubenswrapper[7480]: I0308 21:57:30.185800 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 21:57:30.186064 master-0 kubenswrapper[7480]: I0308 21:57:30.185832 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 08 21:57:30.186064 master-0 kubenswrapper[7480]: I0308 21:57:30.185862 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 21:57:30.186064 master-0 kubenswrapper[7480]: I0308 21:57:30.185896 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 21:57:30.186064 master-0 kubenswrapper[7480]: I0308 21:57:30.185931 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 08 21:57:30.186064 master-0 kubenswrapper[7480]: I0308 21:57:30.185962 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 08 21:57:30.186064 master-0 kubenswrapper[7480]: I0308 21:57:30.185996 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 08 21:57:30.186064 master-0 kubenswrapper[7480]: I0308 21:57:30.186030 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 08 21:57:30.286351 master-0 kubenswrapper[7480]: I0308 21:57:30.286283 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 21:57:30.286351 master-0 kubenswrapper[7480]: I0308 21:57:30.286337 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 21:57:30.286633 master-0 kubenswrapper[7480]: I0308 21:57:30.286461 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 21:57:30.286633 master-0 kubenswrapper[7480]: I0308 21:57:30.286578 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 21:57:30.286739 master-0 kubenswrapper[7480]: I0308 21:57:30.286638 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 21:57:30.286739 master-0 kubenswrapper[7480]: I0308 21:57:30.286666 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 21:57:30.286739 master-0 kubenswrapper[7480]: I0308 21:57:30.286696 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 21:57:30.286739 master-0 kubenswrapper[7480]: I0308 21:57:30.286593 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 21:57:30.286739 master-0 kubenswrapper[7480]: I0308 21:57:30.286713 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 21:57:30.286739 master-0 kubenswrapper[7480]: I0308 21:57:30.286735 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 08 21:57:30.286954 master-0 kubenswrapper[7480]: I0308 21:57:30.286758 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 21:57:30.286954 master-0 kubenswrapper[7480]: I0308 21:57:30.286757 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 21:57:30.286954 master-0 kubenswrapper[7480]: I0308 21:57:30.286792 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 08 21:57:30.286954 master-0 kubenswrapper[7480]: I0308 21:57:30.286835 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 21:57:30.286954 master-0 kubenswrapper[7480]: I0308 21:57:30.286860 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 21:57:30.287171 master-0 kubenswrapper[7480]: I0308 21:57:30.286951 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 21:57:30.287171 master-0 kubenswrapper[7480]: I0308 21:57:30.287003 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 21:57:30.287171 master-0 kubenswrapper[7480]: I0308 21:57:30.287117 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 08 21:57:30.287171 master-0 kubenswrapper[7480]: I0308 21:57:30.287049 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 21:57:30.287171 master-0 kubenswrapper[7480]: I0308 21:57:30.287160 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 08 21:57:30.287360 master-0 kubenswrapper[7480]: I0308 21:57:30.287233 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 08 21:57:30.287360 master-0 kubenswrapper[7480]: I0308 21:57:30.287237 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 08 21:57:30.287360 master-0 kubenswrapper[7480]: I0308 21:57:30.287277 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 08 21:57:30.287360 master-0 kubenswrapper[7480]: I0308 21:57:30.287305 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 08 21:57:30.287360 master-0 kubenswrapper[7480]: I0308 21:57:30.287331 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 21:57:30.287360 master-0 kubenswrapper[7480]: I0308 21:57:30.287362 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 08 21:57:30.287576 master-0 kubenswrapper[7480]: I0308 21:57:30.287382 7480 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 08 21:57:30.287576 master-0 kubenswrapper[7480]: I0308 21:57:30.287437 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 21:57:30.287576 master-0 kubenswrapper[7480]: I0308 21:57:30.287454 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 21:57:30.287576 master-0 kubenswrapper[7480]: I0308 21:57:30.287475 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 21:57:30.287576 master-0 kubenswrapper[7480]: I0308 21:57:30.287500 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 21:57:30.287576 master-0 kubenswrapper[7480]: I0308 21:57:30.287514 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 21:57:30.287576 master-0 kubenswrapper[7480]: I0308 21:57:30.287514 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 08 21:57:30.287822 master-0 kubenswrapper[7480]: I0308 21:57:30.287610 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 08 21:57:30.712494 master-0 kubenswrapper[7480]: I0308 21:57:30.712362 7480 apiserver.go:52] "Watching apiserver" Mar 08 21:57:30.725663 master-0 kubenswrapper[7480]: I0308 21:57:30.725597 7480 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 08 21:57:30.727156 master-0 kubenswrapper[7480]: I0308 21:57:30.727034 7480 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["assisted-installer/assisted-installer-controller-kxkrl","openshift-multus/multus-additional-cni-plugins-74fmb","openshift-network-node-identity/network-node-identity-trhtl","kube-system/bootstrap-kube-scheduler-master-0","openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484","openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x","openshift-dns-operator/dns-operator-589895fbb7-wtvp5","openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w","openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","kube-system/bootstrap-kube-controller-manager-master-0","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg","openshift-multus/network-metrics-daemon-lqdbv","openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8","openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr","openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw","openshift-network-operator/iptables-alerter-pwn9k","openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf","openshift-ingress-operator/ingress-operator-677db989d6-cjdgr","openshift-multus/multus-admission-controller-8d675b596-ddw98","openshift-multus/multus-l8ltx","openshift-network-diagnostics/network-check-target-djlff","openshift-network-operator/network-operator-7c649bf6d4-znt8q","openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr","openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k","openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2","openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5","openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-nl9qg","openshift-config-operator/openshift-config-operator-64488f9d78-krpfs","openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8","openshift-etcd/etcd-master-0-master-0","openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh","openshift-ovn-kubernetes/ovnkube-node-g4d2r"] Mar 08 21:57:30.729257 master-0 kubenswrapper[7480]: I0308 21:57:30.727594 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" Mar 08 21:57:30.729257 master-0 kubenswrapper[7480]: I0308 21:57:30.729152 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 21:57:30.729257 master-0 kubenswrapper[7480]: I0308 21:57:30.727790 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" Mar 08 21:57:30.729257 master-0 kubenswrapper[7480]: I0308 21:57:30.728299 7480 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-kxkrl" Mar 08 21:57:30.729257 master-0 kubenswrapper[7480]: I0308 21:57:30.727702 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 21:57:30.729741 master-0 kubenswrapper[7480]: I0308 21:57:30.728436 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" Mar 08 21:57:30.729741 master-0 kubenswrapper[7480]: I0308 21:57:30.728845 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 21:57:30.729741 master-0 kubenswrapper[7480]: I0308 21:57:30.729698 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" Mar 08 21:57:30.730015 master-0 kubenswrapper[7480]: I0308 21:57:30.728748 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:30.732732 master-0 kubenswrapper[7480]: I0308 21:57:30.732673 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 08 21:57:30.736581 master-0 kubenswrapper[7480]: I0308 21:57:30.735241 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 08 21:57:30.736581 master-0 kubenswrapper[7480]: I0308 21:57:30.735693 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:30.740959 master-0 kubenswrapper[7480]: I0308 21:57:30.739186 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:57:30.744121 master-0 kubenswrapper[7480]: I0308 21:57:30.743501 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 08 21:57:30.744121 master-0 kubenswrapper[7480]: I0308 21:57:30.743693 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 08 21:57:30.745431 master-0 kubenswrapper[7480]: I0308 21:57:30.744607 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 08 21:57:30.745431 master-0 kubenswrapper[7480]: I0308 21:57:30.744802 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 08 21:57:30.745431 master-0 kubenswrapper[7480]: I0308 21:57:30.744877 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 08 21:57:30.745431 master-0 kubenswrapper[7480]: I0308 21:57:30.744896 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 08 21:57:30.745431 master-0 kubenswrapper[7480]: I0308 21:57:30.744914 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 08 21:57:30.745431 master-0 kubenswrapper[7480]: I0308 21:57:30.744917 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 08 21:57:30.745431 master-0 kubenswrapper[7480]: I0308 21:57:30.744921 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 08 21:57:30.745431 master-0 kubenswrapper[7480]: I0308 21:57:30.745159 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 08 21:57:30.745431 master-0 kubenswrapper[7480]: I0308 21:57:30.745276 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 08 21:57:30.746173 master-0 kubenswrapper[7480]: I0308 21:57:30.745731 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-ddw98" Mar 08 21:57:30.746173 master-0 kubenswrapper[7480]: I0308 21:57:30.745731 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:57:30.753140 master-0 kubenswrapper[7480]: I0308 21:57:30.753020 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 08 21:57:30.753442 master-0 kubenswrapper[7480]: I0308 21:57:30.753387 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 08 21:57:30.754113 master-0 kubenswrapper[7480]: I0308 21:57:30.754029 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 08 21:57:30.754697 master-0 kubenswrapper[7480]: I0308 21:57:30.754657 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:57:30.755598 master-0 kubenswrapper[7480]: I0308 21:57:30.755552 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 08 21:57:30.755832 master-0 kubenswrapper[7480]: I0308 21:57:30.755793 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 08 21:57:30.755959 master-0 kubenswrapper[7480]: I0308 21:57:30.755913 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 08 21:57:30.757003 master-0 kubenswrapper[7480]: I0308 21:57:30.756963 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 08 21:57:30.757206 master-0 kubenswrapper[7480]: I0308 21:57:30.757162 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 08 21:57:30.757324 master-0 kubenswrapper[7480]: I0308 21:57:30.757221 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 08 21:57:30.757324 master-0 kubenswrapper[7480]: I0308 21:57:30.757265 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 08 21:57:30.757469 master-0 kubenswrapper[7480]: I0308 21:57:30.757353 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 08 21:57:30.757469 master-0 kubenswrapper[7480]: I0308 21:57:30.757398 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 08 21:57:30.757469 master-0 kubenswrapper[7480]: I0308 21:57:30.757441 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 08 21:57:30.757469 master-0 kubenswrapper[7480]: I0308 21:57:30.757268 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 08 21:57:30.757716 master-0 kubenswrapper[7480]: I0308 21:57:30.757483 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 08 21:57:30.757716 master-0 kubenswrapper[7480]: I0308 21:57:30.757361 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 08 21:57:30.757716 master-0 kubenswrapper[7480]: I0308 21:57:30.757547 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 08 21:57:30.757716 master-0 kubenswrapper[7480]: I0308 21:57:30.757563 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 08 21:57:30.757716 master-0 kubenswrapper[7480]: I0308 21:57:30.757637 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 08 21:57:30.757716 master-0 kubenswrapper[7480]: I0308 21:57:30.757710 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 08 21:57:30.758183 master-0 
kubenswrapper[7480]: I0308 21:57:30.757752 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 08 21:57:30.758183 master-0 kubenswrapper[7480]: I0308 21:57:30.757805 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 08 21:57:30.758183 master-0 kubenswrapper[7480]: I0308 21:57:30.757850 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 08 21:57:30.758183 master-0 kubenswrapper[7480]: I0308 21:57:30.757905 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 08 21:57:30.758183 master-0 kubenswrapper[7480]: I0308 21:57:30.757967 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 08 21:57:30.758183 master-0 kubenswrapper[7480]: I0308 21:57:30.758004 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 08 21:57:30.758655 master-0 kubenswrapper[7480]: I0308 21:57:30.758267 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 08 21:57:30.758655 master-0 kubenswrapper[7480]: I0308 21:57:30.758383 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 08 21:57:30.758655 master-0 kubenswrapper[7480]: I0308 21:57:30.758277 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 08 21:57:30.758655 master-0 kubenswrapper[7480]: I0308 21:57:30.758594 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 08 21:57:30.758655 master-0 kubenswrapper[7480]: I0308 21:57:30.758613 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 08 21:57:30.758945 master-0 kubenswrapper[7480]: I0308 21:57:30.758456 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 08 21:57:30.758945 master-0 kubenswrapper[7480]: I0308 21:57:30.758712 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 08 21:57:30.758945 master-0 kubenswrapper[7480]: I0308 21:57:30.758765 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 08 21:57:30.758945 master-0 kubenswrapper[7480]: I0308 21:57:30.758887 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 08 21:57:30.759891 master-0 kubenswrapper[7480]: I0308 21:57:30.759854 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 08 21:57:30.760450 master-0 kubenswrapper[7480]: I0308 21:57:30.760387 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 08 21:57:30.760888 master-0 kubenswrapper[7480]: I0308 21:57:30.760833 7480 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 08 21:57:30.760999 master-0 kubenswrapper[7480]: I0308 21:57:30.760956 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 08 21:57:30.761129 master-0 kubenswrapper[7480]: I0308 21:57:30.761038 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 08 21:57:30.761219 master-0 kubenswrapper[7480]: I0308 21:57:30.761154 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 08 21:57:30.762970 master-0 kubenswrapper[7480]: I0308 21:57:30.762934 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 08 21:57:30.763582 master-0 kubenswrapper[7480]: I0308 21:57:30.763465 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 08 21:57:30.763582 master-0 kubenswrapper[7480]: I0308 21:57:30.763492 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 08 21:57:30.763872 master-0 kubenswrapper[7480]: I0308 21:57:30.763531 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 08 21:57:30.764785 master-0 kubenswrapper[7480]: I0308 21:57:30.764590 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 08 21:57:30.764785 master-0 kubenswrapper[7480]: I0308 21:57:30.764734 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 08 21:57:30.776225 master-0 kubenswrapper[7480]: I0308 21:57:30.774716 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 08 21:57:30.776225 master-0 kubenswrapper[7480]: I0308 21:57:30.775232 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 08 21:57:30.776225 master-0 kubenswrapper[7480]: I0308 21:57:30.776257 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 08 21:57:30.776225 master-0 kubenswrapper[7480]: I0308 21:57:30.776355 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 08 21:57:30.780606 master-0 kubenswrapper[7480]: I0308 21:57:30.776939 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 08 21:57:30.781924 master-0 kubenswrapper[7480]: I0308 21:57:30.781802 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 08 21:57:30.783160 master-0 kubenswrapper[7480]: I0308 21:57:30.782784 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 08 21:57:30.783160 master-0 kubenswrapper[7480]: I0308 21:57:30.782838 7480 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 08 21:57:30.783160 master-0 kubenswrapper[7480]: I0308 21:57:30.782907 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 08 21:57:30.783160 master-0 kubenswrapper[7480]: I0308 21:57:30.782993 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 08 21:57:30.783160 master-0 kubenswrapper[7480]: I0308 21:57:30.783132 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Mar 08 21:57:30.783160 master-0 kubenswrapper[7480]: I0308 21:57:30.783160 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 08 21:57:30.783471 master-0 kubenswrapper[7480]: I0308 21:57:30.783197 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 08 21:57:30.783471 master-0 kubenswrapper[7480]: I0308 21:57:30.783305 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 08 21:57:30.783471 master-0 kubenswrapper[7480]: I0308 21:57:30.783317 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 08 21:57:30.783471 master-0 kubenswrapper[7480]: I0308 21:57:30.783394 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 08 21:57:30.783471 master-0 kubenswrapper[7480]: I0308 21:57:30.783448 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 08 21:57:30.783471 master-0 kubenswrapper[7480]: I0308 21:57:30.783466 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 08 21:57:30.784482 master-0 kubenswrapper[7480]: I0308 21:57:30.783514 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 08 21:57:30.784482 master-0 kubenswrapper[7480]: I0308 21:57:30.783308 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 08 21:57:30.784482 master-0 kubenswrapper[7480]: I0308 21:57:30.783627 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 08 21:57:30.784482 master-0 kubenswrapper[7480]: I0308 21:57:30.783467 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 08 21:57:30.784482 master-0 kubenswrapper[7480]: I0308 21:57:30.783728 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 08 21:57:30.784482 master-0 kubenswrapper[7480]: I0308 21:57:30.783670 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 08 21:57:30.784482 master-0 kubenswrapper[7480]: I0308 21:57:30.783874 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 08 21:57:30.784482 master-0 kubenswrapper[7480]: I0308 21:57:30.783904 7480 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"env-overrides" Mar 08 21:57:30.786236 master-0 kubenswrapper[7480]: I0308 21:57:30.785742 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 08 21:57:30.786236 master-0 kubenswrapper[7480]: I0308 21:57:30.785995 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 08 21:57:30.786693 master-0 kubenswrapper[7480]: I0308 21:57:30.786391 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 08 21:57:30.786693 master-0 kubenswrapper[7480]: I0308 21:57:30.786472 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 08 21:57:30.786693 master-0 kubenswrapper[7480]: I0308 21:57:30.786487 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 08 21:57:30.786903 master-0 kubenswrapper[7480]: I0308 21:57:30.786868 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 08 21:57:30.786903 master-0 kubenswrapper[7480]: I0308 21:57:30.786872 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 08 21:57:30.787153 master-0 kubenswrapper[7480]: I0308 21:57:30.787110 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 08 21:57:30.788091 master-0 kubenswrapper[7480]: I0308 21:57:30.788021 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 08 21:57:30.788171 master-0 kubenswrapper[7480]: I0308 21:57:30.788113 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 08 21:57:30.788225 master-0 kubenswrapper[7480]: I0308 21:57:30.788186 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 08 21:57:30.793609 master-0 kubenswrapper[7480]: I0308 21:57:30.793514 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-cnibin\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.793746 master-0 kubenswrapper[7480]: I0308 21:57:30.793671 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6fbc12f-3c27-4a7a-933f-43a55c960335-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-2mspg\" (UID: \"f6fbc12f-3c27-4a7a-933f-43a55c960335\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg" Mar 08 21:57:30.793823 master-0 kubenswrapper[7480]: I0308 21:57:30.793774 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dr4p\" (UniqueName: \"kubernetes.io/projected/df48e7e0-0659-48e2-9b6a-32c964ff47b2-kube-api-access-4dr4p\") pod \"dns-operator-589895fbb7-wtvp5\" (UID: \"df48e7e0-0659-48e2-9b6a-32c964ff47b2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" Mar 08 21:57:30.793889 master-0 
kubenswrapper[7480]: I0308 21:57:30.793817 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9xj9\" (UniqueName: \"kubernetes.io/projected/96a67acb-9cc6-4793-b99a-01479b239d76-kube-api-access-d9xj9\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 21:57:30.793948 master-0 kubenswrapper[7480]: I0308 21:57:30.793915 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwdhp\" (UniqueName: \"kubernetes.io/projected/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-kube-api-access-vwdhp\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 21:57:30.794007 master-0 kubenswrapper[7480]: I0308 21:57:30.793967 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-5ljhh\" (UID: \"7e0267ba-5dd7-4e81-885f-95b27a7b42ea\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 21:57:30.794065 master-0 kubenswrapper[7480]: I0308 21:57:30.794016 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-run-netns\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.794162 master-0 kubenswrapper[7480]: I0308 21:57:30.794062 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a913c639-ebfc-42a3-85cd-8a460027d3ec-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:30.794162 master-0 kubenswrapper[7480]: I0308 21:57:30.794155 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4382d186-34e4-40af-9b92-bb17ddcaa23f-serving-cert\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:30.794289 master-0 kubenswrapper[7480]: I0308 21:57:30.794204 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hstt\" (UniqueName: \"kubernetes.io/projected/4382d186-34e4-40af-9b92-bb17ddcaa23f-kube-api-access-2hstt\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:30.794289 master-0 kubenswrapper[7480]: I0308 21:57:30.794247 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-serving-cert\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 21:57:30.794460 master-0 kubenswrapper[7480]: I0308 21:57:30.794291 7480 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-socket-dir-parent\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.794460 master-0 kubenswrapper[7480]: I0308 21:57:30.794331 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6fbc12f-3c27-4a7a-933f-43a55c960335-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-2mspg\" (UID: \"f6fbc12f-3c27-4a7a-933f-43a55c960335\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg" Mar 08 21:57:30.794460 master-0 kubenswrapper[7480]: I0308 21:57:30.794408 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h4vv\" (UniqueName: \"kubernetes.io/projected/de89c423-0f2a-440f-9fa9-92fefea84b09-kube-api-access-7h4vv\") pod \"cluster-olm-operator-77899cf6d-mnf25\" (UID: \"de89c423-0f2a-440f-9fa9-92fefea84b09\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" Mar 08 21:57:30.794460 master-0 kubenswrapper[7480]: I0308 21:57:30.794450 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-config\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 21:57:30.794667 master-0 kubenswrapper[7480]: I0308 21:57:30.794537 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0641333-feda-44c5-baf5-ceee4ce3fd8f-serving-cert\") pod \"openshift-config-operator-64488f9d78-krpfs\" (UID: \"d0641333-feda-44c5-baf5-ceee4ce3fd8f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" Mar 08 21:57:30.794667 master-0 kubenswrapper[7480]: I0308 21:57:30.794587 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04fb7bdb-fb5a-4187-94a3-67c8f09684ed-serving-cert\") pod \"kube-apiserver-operator-68bd585b-mww2c\" (UID: \"04fb7bdb-fb5a-4187-94a3-67c8f09684ed\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" Mar 08 21:57:30.794667 master-0 kubenswrapper[7480]: I0308 21:57:30.794634 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-hostroot\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.794831 master-0 kubenswrapper[7480]: I0308 21:57:30.794680 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngf2z\" (UniqueName: \"kubernetes.io/projected/d4d01185-e485-4697-92c2-31a044f25d82-kube-api-access-ngf2z\") pod \"openshift-controller-manager-operator-8565d84698-x8jg8\" (UID: \"d4d01185-e485-4697-92c2-31a044f25d82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" Mar 08 21:57:30.794831 master-0 kubenswrapper[7480]: I0308 21:57:30.794731 7480 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/96a67acb-9cc6-4793-b99a-01479b239d76-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 21:57:30.794831 master-0 kubenswrapper[7480]: I0308 21:57:30.794778 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b849f992-1020-4633-98be-75705b962fa9-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-8pqc2\" (UID: \"b849f992-1020-4633-98be-75705b962fa9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2" Mar 08 21:57:30.794831 master-0 kubenswrapper[7480]: I0308 21:57:30.794822 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d0641333-feda-44c5-baf5-ceee4ce3fd8f-available-featuregates\") pod \"openshift-config-operator-64488f9d78-krpfs\" (UID: \"d0641333-feda-44c5-baf5-ceee4ce3fd8f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" Mar 08 21:57:30.795038 master-0 kubenswrapper[7480]: I0308 21:57:30.794860 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert\") pod \"catalog-operator-7d9c49f57b-6q5t2\" (UID: \"83b5f0b6-adee-4820-8212-b4d182b178d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" Mar 08 21:57:30.795038 master-0 kubenswrapper[7480]: I0308 21:57:30.794907 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/971ffa86-4d52-4dc3-ba28-03d116ec3494-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-zk8sw\" (UID: \"971ffa86-4d52-4dc3-ba28-03d116ec3494\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw" Mar 08 21:57:30.795038 master-0 kubenswrapper[7480]: I0308 21:57:30.794948 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-run-k8s-cni-cncf-io\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.795038 master-0 kubenswrapper[7480]: I0308 21:57:30.794987 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-var-lib-cni-bin\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.795038 master-0 kubenswrapper[7480]: I0308 21:57:30.795027 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8e00c74-fb72-4e3d-a22c-c38a4772a813-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-nqz5k\" (UID: \"a8e00c74-fb72-4e3d-a22c-c38a4772a813\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k" Mar 08 21:57:30.795808 master-0 kubenswrapper[7480]: I0308 21:57:30.795752 7480 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8e00c74-fb72-4e3d-a22c-c38a4772a813-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-nqz5k\" (UID: \"a8e00c74-fb72-4e3d-a22c-c38a4772a813\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k" Mar 08 21:57:30.796378 master-0 kubenswrapper[7480]: I0308 21:57:30.796330 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6fbc12f-3c27-4a7a-933f-43a55c960335-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-2mspg\" (UID: \"f6fbc12f-3c27-4a7a-933f-43a55c960335\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg" Mar 08 21:57:30.796378 master-0 kubenswrapper[7480]: I0308 21:57:30.796382 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b849f992-1020-4633-98be-75705b962fa9-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-8pqc2\" (UID: \"b849f992-1020-4633-98be-75705b962fa9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2" Mar 08 21:57:30.796378 master-0 kubenswrapper[7480]: I0308 21:57:30.796744 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-mt484\" (UID: \"4ef806a4-5486-43a9-8bfa-b1670c888dc1\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" Mar 08 21:57:30.797145 master-0 kubenswrapper[7480]: I0308 21:57:30.796993 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-config\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 21:57:30.799647 master-0 kubenswrapper[7480]: I0308 21:57:30.797459 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/971ffa86-4d52-4dc3-ba28-03d116ec3494-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-zk8sw\" (UID: \"971ffa86-4d52-4dc3-ba28-03d116ec3494\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw" Mar 08 21:57:30.799647 master-0 kubenswrapper[7480]: I0308 21:57:30.797943 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 08 21:57:30.799647 master-0 kubenswrapper[7480]: I0308 21:57:30.798474 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0641333-feda-44c5-baf5-ceee4ce3fd8f-serving-cert\") pod \"openshift-config-operator-64488f9d78-krpfs\" (UID: \"d0641333-feda-44c5-baf5-ceee4ce3fd8f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" Mar 08 21:57:30.799647 master-0 kubenswrapper[7480]: I0308 21:57:30.798662 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:30.799647 master-0 kubenswrapper[7480]: I0308 21:57:30.798777 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04fb7bdb-fb5a-4187-94a3-67c8f09684ed-serving-cert\") pod \"kube-apiserver-operator-68bd585b-mww2c\" (UID: \"04fb7bdb-fb5a-4187-94a3-67c8f09684ed\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" Mar 08 21:57:30.799647 master-0 kubenswrapper[7480]: I0308 21:57:30.799051 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b849f992-1020-4633-98be-75705b962fa9-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-8pqc2\" (UID: \"b849f992-1020-4633-98be-75705b962fa9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2" Mar 08 21:57:30.799647 master-0 kubenswrapper[7480]: I0308 21:57:30.799260 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/971ffa86-4d52-4dc3-ba28-03d116ec3494-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-zk8sw\" (UID: \"971ffa86-4d52-4dc3-ba28-03d116ec3494\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw" Mar 08 21:57:30.799647 master-0 kubenswrapper[7480]: I0308 21:57:30.799259 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv57k\" (UniqueName: \"kubernetes.io/projected/be431b74-1116-4b0f-8b25-bbb0408411b0-kube-api-access-tv57k\") pod \"package-server-manager-854648ff6d-x5zxr\" (UID: \"be431b74-1116-4b0f-8b25-bbb0408411b0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 21:57:30.799647 master-0 kubenswrapper[7480]: I0308 21:57:30.799341 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4382d186-34e4-40af-9b92-bb17ddcaa23f-serving-cert\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:30.799647 master-0 kubenswrapper[7480]: I0308 21:57:30.799409 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/96a67acb-9cc6-4793-b99a-01479b239d76-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 21:57:30.799647 master-0 kubenswrapper[7480]: I0308 21:57:30.799436 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/971ffa86-4d52-4dc3-ba28-03d116ec3494-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-zk8sw\" (UID: \"971ffa86-4d52-4dc3-ba28-03d116ec3494\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw" Mar 08 21:57:30.799647 master-0 kubenswrapper[7480]: I0308 21:57:30.799633 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-serving-cert\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 21:57:30.800328 master-0 kubenswrapper[7480]: I0308 21:57:30.799731 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d851f97-b21e-432e-a4c3-dc0a8ff00e84-serving-cert\") pod \"service-ca-operator-69b6fc6b88-pkcp5\" (UID: \"0d851f97-b21e-432e-a4c3-dc0a8ff00e84\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5" Mar 08 21:57:30.800328 master-0 kubenswrapper[7480]: I0308 21:57:30.799821 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d0641333-feda-44c5-baf5-ceee4ce3fd8f-available-featuregates\") pod \"openshift-config-operator-64488f9d78-krpfs\" (UID: \"d0641333-feda-44c5-baf5-ceee4ce3fd8f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" Mar 08 21:57:30.800328 master-0 kubenswrapper[7480]: I0308 21:57:30.800244 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 08 21:57:30.801122 master-0 kubenswrapper[7480]: I0308 21:57:30.801048 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d851f97-b21e-432e-a4c3-dc0a8ff00e84-serving-cert\") pod \"service-ca-operator-69b6fc6b88-pkcp5\" (UID: \"0d851f97-b21e-432e-a4c3-dc0a8ff00e84\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5" Mar 08 21:57:30.801369 master-0 kubenswrapper[7480]: I0308 21:57:30.801315 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/081acedd-4c88-461f-80f3-e80fdbadb725-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-ngrjm\" (UID: \"081acedd-4c88-461f-80f3-e80fdbadb725\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" Mar 08 21:57:30.801863 master-0 kubenswrapper[7480]: I0308 21:57:30.801786 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4382d186-34e4-40af-9b92-bb17ddcaa23f-etcd-client\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:30.802019 master-0 kubenswrapper[7480]: I0308 21:57:30.801962 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04fb7bdb-fb5a-4187-94a3-67c8f09684ed-config\") pod \"kube-apiserver-operator-68bd585b-mww2c\" (UID: \"04fb7bdb-fb5a-4187-94a3-67c8f09684ed\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" Mar 08 21:57:30.802140 master-0 kubenswrapper[7480]: I0308 21:57:30.802055 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:57:30.802201 master-0 
kubenswrapper[7480]: I0308 21:57:30.802145 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/de89c423-0f2a-440f-9fa9-92fefea84b09-operand-assets\") pod \"cluster-olm-operator-77899cf6d-mnf25\" (UID: \"de89c423-0f2a-440f-9fa9-92fefea84b09\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" Mar 08 21:57:30.802259 master-0 kubenswrapper[7480]: I0308 21:57:30.802203 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4382d186-34e4-40af-9b92-bb17ddcaa23f-etcd-client\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:30.802438 master-0 kubenswrapper[7480]: I0308 21:57:30.802390 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/081acedd-4c88-461f-80f3-e80fdbadb725-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-ngrjm\" (UID: \"081acedd-4c88-461f-80f3-e80fdbadb725\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" Mar 08 21:57:30.802438 master-0 kubenswrapper[7480]: I0308 21:57:30.802414 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/de89c423-0f2a-440f-9fa9-92fefea84b09-operand-assets\") pod \"cluster-olm-operator-77899cf6d-mnf25\" (UID: \"de89c423-0f2a-440f-9fa9-92fefea84b09\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" Mar 08 21:57:30.802553 master-0 kubenswrapper[7480]: I0308 21:57:30.802476 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/385e69e4-d443-44bb-8ee4-578a1c902c62-cni-binary-copy\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.802609 master-0 kubenswrapper[7480]: I0308 21:57:30.802545 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04fb7bdb-fb5a-4187-94a3-67c8f09684ed-config\") pod \"kube-apiserver-operator-68bd585b-mww2c\" (UID: \"04fb7bdb-fb5a-4187-94a3-67c8f09684ed\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" Mar 08 21:57:30.802657 master-0 kubenswrapper[7480]: I0308 21:57:30.802601 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxg7t\" (UniqueName: \"kubernetes.io/projected/385e69e4-d443-44bb-8ee4-578a1c902c62-kube-api-access-vxg7t\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.802767 master-0 kubenswrapper[7480]: I0308 21:57:30.802729 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/385e69e4-d443-44bb-8ee4-578a1c902c62-cni-binary-copy\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.802990 master-0 kubenswrapper[7480]: I0308 21:57:30.802945 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d01185-e485-4697-92c2-31a044f25d82-config\") pod \"openshift-controller-manager-operator-8565d84698-x8jg8\" (UID: 
\"d4d01185-e485-4697-92c2-31a044f25d82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" Mar 08 21:57:30.803315 master-0 kubenswrapper[7480]: I0308 21:57:30.803263 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs\") pod \"network-metrics-daemon-lqdbv\" (UID: \"44e67e41-045e-42ef-8f60-6ef15606d6a2\") " pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:57:30.803401 master-0 kubenswrapper[7480]: I0308 21:57:30.803364 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5xq4\" (UniqueName: \"kubernetes.io/projected/f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e-kube-api-access-l5xq4\") pod \"network-check-target-djlff\" (UID: \"f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e\") " pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:57:30.803693 master-0 kubenswrapper[7480]: I0308 21:57:30.803641 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/96a67acb-9cc6-4793-b99a-01479b239d76-cni-binary-copy\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 21:57:30.803693 master-0 kubenswrapper[7480]: I0308 21:57:30.803687 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tlmx\" (UniqueName: \"kubernetes.io/projected/0d851f97-b21e-432e-a4c3-dc0a8ff00e84-kube-api-access-7tlmx\") pod \"service-ca-operator-69b6fc6b88-pkcp5\" (UID: \"0d851f97-b21e-432e-a4c3-dc0a8ff00e84\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5" Mar 08 21:57:30.803822 master-0 kubenswrapper[7480]: I0308 21:57:30.803726 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d287e2ca-f134-4e34-96f7-50a3055ee119-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:57:30.803822 master-0 kubenswrapper[7480]: I0308 21:57:30.803756 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d01185-e485-4697-92c2-31a044f25d82-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-x8jg8\" (UID: \"d4d01185-e485-4697-92c2-31a044f25d82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" Mar 08 21:57:30.804112 master-0 kubenswrapper[7480]: I0308 21:57:30.804055 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4382d186-34e4-40af-9b92-bb17ddcaa23f-etcd-ca\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:30.804304 master-0 kubenswrapper[7480]: I0308 21:57:30.804166 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d01185-e485-4697-92c2-31a044f25d82-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-x8jg8\" (UID: 
\"d4d01185-e485-4697-92c2-31a044f25d82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" Mar 08 21:57:30.804304 master-0 kubenswrapper[7480]: I0308 21:57:30.804202 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 21:57:30.804304 master-0 kubenswrapper[7480]: I0308 21:57:30.804252 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 21:57:30.805282 master-0 kubenswrapper[7480]: I0308 21:57:30.805240 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-system-cni-dir\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.805351 master-0 kubenswrapper[7480]: I0308 21:57:30.805298 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/4ef806a4-5486-43a9-8bfa-b1670c888dc1-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-mt484\" (UID: \"4ef806a4-5486-43a9-8bfa-b1670c888dc1\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" Mar 08 21:57:30.805351 master-0 kubenswrapper[7480]: I0308 21:57:30.805328 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4382d186-34e4-40af-9b92-bb17ddcaa23f-etcd-ca\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:30.805478 master-0 kubenswrapper[7480]: I0308 21:57:30.805344 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4382d186-34e4-40af-9b92-bb17ddcaa23f-config\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:30.805478 master-0 kubenswrapper[7480]: I0308 21:57:30.805435 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04fb7bdb-fb5a-4187-94a3-67c8f09684ed-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-mww2c\" (UID: \"04fb7bdb-fb5a-4187-94a3-67c8f09684ed\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" Mar 08 21:57:30.805584 master-0 kubenswrapper[7480]: I0308 21:57:30.805333 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d01185-e485-4697-92c2-31a044f25d82-config\") pod \"openshift-controller-manager-operator-8565d84698-x8jg8\" (UID: \"d4d01185-e485-4697-92c2-31a044f25d82\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" Mar 08 21:57:30.805652 master-0 kubenswrapper[7480]: I0308 21:57:30.805612 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-system-cni-dir\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 21:57:30.805652 master-0 kubenswrapper[7480]: I0308 21:57:30.804303 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/96a67acb-9cc6-4793-b99a-01479b239d76-cni-binary-copy\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 21:57:30.805754 master-0 kubenswrapper[7480]: I0308 21:57:30.805658 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-os-release\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 21:57:30.805754 master-0 kubenswrapper[7480]: I0308 21:57:30.805690 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/4ef806a4-5486-43a9-8bfa-b1670c888dc1-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-mt484\" (UID: \"4ef806a4-5486-43a9-8bfa-b1670c888dc1\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" Mar 08 21:57:30.805754 master-0 kubenswrapper[7480]: I0308 21:57:30.805715 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-conf-dir\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.805901 master-0 kubenswrapper[7480]: I0308 21:57:30.805765 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4382d186-34e4-40af-9b92-bb17ddcaa23f-config\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:30.805901 master-0 kubenswrapper[7480]: I0308 21:57:30.805854 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:30.806009 master-0 kubenswrapper[7480]: I0308 21:57:30.805902 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drcp8\" (UniqueName: \"kubernetes.io/projected/a913c639-ebfc-42a3-85cd-8a460027d3ec-kube-api-access-drcp8\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " 
pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:30.806009 master-0 kubenswrapper[7480]: I0308 21:57:30.805961 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6fbc12f-3c27-4a7a-933f-43a55c960335-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-2mspg\" (UID: \"f6fbc12f-3c27-4a7a-933f-43a55c960335\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg" Mar 08 21:57:30.806148 master-0 kubenswrapper[7480]: I0308 21:57:30.806021 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d851f97-b21e-432e-a4c3-dc0a8ff00e84-config\") pod \"service-ca-operator-69b6fc6b88-pkcp5\" (UID: \"0d851f97-b21e-432e-a4c3-dc0a8ff00e84\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5" Mar 08 21:57:30.806521 master-0 kubenswrapper[7480]: I0308 21:57:30.806471 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 08 21:57:30.806595 master-0 kubenswrapper[7480]: I0308 21:57:30.806580 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpxls\" (UniqueName: \"kubernetes.io/projected/081acedd-4c88-461f-80f3-e80fdbadb725-kube-api-access-cpxls\") pod \"ovnkube-control-plane-66b55d57d-ngrjm\" (UID: \"081acedd-4c88-461f-80f3-e80fdbadb725\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" Mar 08 21:57:30.806650 master-0 kubenswrapper[7480]: I0308 21:57:30.806630 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-cni-dir\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.806842 master-0 kubenswrapper[7480]: I0308 21:57:30.806786 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzlpq\" (UniqueName: \"kubernetes.io/projected/4ef806a4-5486-43a9-8bfa-b1670c888dc1-kube-api-access-qzlpq\") pod \"cluster-monitoring-operator-674cbfbd9d-mt484\" (UID: \"4ef806a4-5486-43a9-8bfa-b1670c888dc1\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" Mar 08 21:57:30.806842 master-0 kubenswrapper[7480]: I0308 21:57:30.806824 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6fbc12f-3c27-4a7a-933f-43a55c960335-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-2mspg\" (UID: \"f6fbc12f-3c27-4a7a-933f-43a55c960335\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg" Mar 08 21:57:30.806986 master-0 kubenswrapper[7480]: I0308 21:57:30.806907 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d851f97-b21e-432e-a4c3-dc0a8ff00e84-config\") pod \"service-ca-operator-69b6fc6b88-pkcp5\" (UID: \"0d851f97-b21e-432e-a4c3-dc0a8ff00e84\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5" Mar 08 21:57:30.806986 master-0 kubenswrapper[7480]: I0308 21:57:30.806911 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-cnibin\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 21:57:30.807135 master-0 kubenswrapper[7480]: I0308 21:57:30.807038 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 21:57:30.807135 master-0 kubenswrapper[7480]: I0308 21:57:30.807111 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-5ljhh\" (UID: \"7e0267ba-5dd7-4e81-885f-95b27a7b42ea\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 21:57:30.807246 master-0 kubenswrapper[7480]: I0308 21:57:30.807155 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert\") pod \"olm-operator-d64cfc9db-xqh7x\" (UID: \"3c50dd1f-fcbc-412c-a1cc-0738ea4464e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" Mar 08 21:57:30.807246 master-0 kubenswrapper[7480]: I0308 21:57:30.807197 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z7fx\" (UniqueName: \"kubernetes.io/projected/971ffa86-4d52-4dc3-ba28-03d116ec3494-kube-api-access-7z7fx\") pod \"kube-storage-version-migrator-operator-7f65c457f5-zk8sw\" (UID: \"971ffa86-4d52-4dc3-ba28-03d116ec3494\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw" Mar 08 21:57:30.807246 master-0 kubenswrapper[7480]: I0308 21:57:30.807237 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d287e2ca-f134-4e34-96f7-50a3055ee119-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:57:30.807400 master-0 kubenswrapper[7480]: I0308 21:57:30.807273 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls\") pod \"dns-operator-589895fbb7-wtvp5\" (UID: \"df48e7e0-0659-48e2-9b6a-32c964ff47b2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" Mar 08 21:57:30.807400 master-0 kubenswrapper[7480]: I0308 21:57:30.807303 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 08 21:57:30.807400 master-0 kubenswrapper[7480]: I0308 21:57:30.807313 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pwq4\" (UniqueName: \"kubernetes.io/projected/83b5f0b6-adee-4820-8212-b4d182b178d2-kube-api-access-5pwq4\") pod \"catalog-operator-7d9c49f57b-6q5t2\" (UID: \"83b5f0b6-adee-4820-8212-b4d182b178d2\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" Mar 08 21:57:30.807400 master-0 kubenswrapper[7480]: I0308 21:57:30.807353 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4382d186-34e4-40af-9b92-bb17ddcaa23f-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:30.807400 master-0 kubenswrapper[7480]: I0308 21:57:30.807388 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-var-lib-cni-multus\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.807400 master-0 kubenswrapper[7480]: I0308 21:57:30.807389 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 21:57:30.807689 master-0 kubenswrapper[7480]: I0308 21:57:30.807422 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-var-lib-kubelet\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.807689 master-0 kubenswrapper[7480]: I0308 21:57:30.807456 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-etc-kubernetes\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.807689 master-0 kubenswrapper[7480]: I0308 21:57:30.807596 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4382d186-34e4-40af-9b92-bb17ddcaa23f-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:30.808362 master-0 kubenswrapper[7480]: I0308 21:57:30.808320 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a913c639-ebfc-42a3-85cd-8a460027d3ec-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:30.808362 master-0 kubenswrapper[7480]: I0308 21:57:30.808345 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-5ljhh\" (UID: \"7e0267ba-5dd7-4e81-885f-95b27a7b42ea\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 21:57:30.808506 master-0 kubenswrapper[7480]: I0308 21:57:30.808365 7480 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2851c096-f5cb-4a46-a5a0-ac0b1341033b-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:30.808506 master-0 kubenswrapper[7480]: I0308 21:57:30.808446 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a21e2296-10cb-4c70-ac3e-2173d35faac4-metrics-tls\") pod \"network-operator-7c649bf6d4-znt8q\" (UID: \"a21e2296-10cb-4c70-ac3e-2173d35faac4\") " pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" Mar 08 21:57:30.808506 master-0 kubenswrapper[7480]: I0308 21:57:30.808480 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d287e2ca-f134-4e34-96f7-50a3055ee119-kube-api-access\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:57:30.808660 master-0 kubenswrapper[7480]: I0308 21:57:30.808515 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d287e2ca-f134-4e34-96f7-50a3055ee119-service-ca\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:57:30.808660 master-0 kubenswrapper[7480]: I0308 21:57:30.808552 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b849f992-1020-4633-98be-75705b962fa9-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-8pqc2\" (UID: \"b849f992-1020-4633-98be-75705b962fa9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2" Mar 08 21:57:30.808660 master-0 kubenswrapper[7480]: I0308 21:57:30.808588 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-daemon-config\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.808660 master-0 kubenswrapper[7480]: I0308 21:57:30.808624 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-run-multus-certs\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.808854 master-0 kubenswrapper[7480]: I0308 21:57:30.808661 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-784c7\" (UniqueName: \"kubernetes.io/projected/d0641333-feda-44c5-baf5-ceee4ce3fd8f-kube-api-access-784c7\") pod \"openshift-config-operator-64488f9d78-krpfs\" (UID: \"d0641333-feda-44c5-baf5-ceee4ce3fd8f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" Mar 08 21:57:30.808854 master-0 kubenswrapper[7480]: I0308 21:57:30.808697 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/de89c423-0f2a-440f-9fa9-92fefea84b09-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-mnf25\" (UID: \"de89c423-0f2a-440f-9fa9-92fefea84b09\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" Mar 08 21:57:30.808854 master-0 kubenswrapper[7480]: I0308 21:57:30.808735 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/96a67acb-9cc6-4793-b99a-01479b239d76-whereabouts-configmap\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 21:57:30.808854 master-0 kubenswrapper[7480]: I0308 21:57:30.808770 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/081acedd-4c88-461f-80f3-e80fdbadb725-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-ngrjm\" (UID: \"081acedd-4c88-461f-80f3-e80fdbadb725\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" Mar 08 21:57:30.808854 master-0 kubenswrapper[7480]: I0308 21:57:30.808851 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/a21e2296-10cb-4c70-ac3e-2173d35faac4-host-etc-kube\") pod \"network-operator-7c649bf6d4-znt8q\" (UID: \"a21e2296-10cb-4c70-ac3e-2173d35faac4\") " pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" Mar 08 21:57:30.809146 master-0 kubenswrapper[7480]: I0308 21:57:30.808888 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl4xt\" (UniqueName: \"kubernetes.io/projected/44e67e41-045e-42ef-8f60-6ef15606d6a2-kube-api-access-zl4xt\") pod \"network-metrics-daemon-lqdbv\" (UID: \"44e67e41-045e-42ef-8f60-6ef15606d6a2\") " pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:57:30.809146 master-0 kubenswrapper[7480]: I0308 21:57:30.808911 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a913c639-ebfc-42a3-85cd-8a460027d3ec-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:30.809146 master-0 kubenswrapper[7480]: I0308 21:57:30.808925 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff6pm\" (UniqueName: \"kubernetes.io/projected/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-kube-api-access-ff6pm\") pod \"olm-operator-d64cfc9db-xqh7x\" (UID: \"3c50dd1f-fcbc-412c-a1cc-0738ea4464e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" Mar 08 21:57:30.809146 master-0 kubenswrapper[7480]: I0308 21:57:30.808963 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs\") pod \"multus-admission-controller-8d675b596-ddw98\" (UID: \"1dfc8afd-2330-46a4-ae5b-36522102b332\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddw98" Mar 08 21:57:30.809146 master-0 kubenswrapper[7480]: I0308 21:57:30.809003 7480 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 08 21:57:30.809146 master-0 kubenswrapper[7480]: I0308 21:57:30.809059 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a21e2296-10cb-4c70-ac3e-2173d35faac4-metrics-tls\") pod \"network-operator-7c649bf6d4-znt8q\" (UID: \"a21e2296-10cb-4c70-ac3e-2173d35faac4\") " pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" Mar 08 21:57:30.809146 master-0 kubenswrapper[7480]: I0308 21:57:30.809010 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8e00c74-fb72-4e3d-a22c-c38a4772a813-config\") pod \"openshift-apiserver-operator-799b6db4d7-nqz5k\" (UID: \"a8e00c74-fb72-4e3d-a22c-c38a4772a813\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k" Mar 08 21:57:30.809479 master-0 kubenswrapper[7480]: I0308 21:57:30.809179 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xcbb\" (UniqueName: \"kubernetes.io/projected/a21e2296-10cb-4c70-ac3e-2173d35faac4-kube-api-access-7xcbb\") pod \"network-operator-7c649bf6d4-znt8q\" (UID: \"a21e2296-10cb-4c70-ac3e-2173d35faac4\") " pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" Mar 08 21:57:30.809479 master-0 kubenswrapper[7480]: I0308 21:57:30.809183 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/de89c423-0f2a-440f-9fa9-92fefea84b09-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-mnf25\" (UID: \"de89c423-0f2a-440f-9fa9-92fefea84b09\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" Mar 08 21:57:30.809479 master-0 kubenswrapper[7480]: I0308 21:57:30.809392 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6ht7\" (UniqueName: \"kubernetes.io/projected/37bf82cb-adea-46d3-a899-136eb1d1f292-kube-api-access-v6ht7\") pod \"csi-snapshot-controller-operator-5685fbc7d-nl9qg\" (UID: \"37bf82cb-adea-46d3-a899-136eb1d1f292\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-nl9qg" Mar 08 21:57:30.809479 master-0 kubenswrapper[7480]: I0308 21:57:30.809405 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8e00c74-fb72-4e3d-a22c-c38a4772a813-config\") pod \"openshift-apiserver-operator-799b6db4d7-nqz5k\" (UID: \"a8e00c74-fb72-4e3d-a22c-c38a4772a813\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k" Mar 08 21:57:30.809674 master-0 kubenswrapper[7480]: I0308 21:57:30.809618 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b849f992-1020-4633-98be-75705b962fa9-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-8pqc2\" (UID: \"b849f992-1020-4633-98be-75705b962fa9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2" Mar 08 21:57:30.809674 master-0 kubenswrapper[7480]: I0308 21:57:30.809634 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjt52\" (UniqueName: \"kubernetes.io/projected/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-kube-api-access-jjt52\") pod \"marketplace-operator-64bf9778cb-5ljhh\" (UID: 
\"7e0267ba-5dd7-4e81-885f-95b27a7b42ea\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 21:57:30.809847 master-0 kubenswrapper[7480]: I0308 21:57:30.809713 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-os-release\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.809847 master-0 kubenswrapper[7480]: I0308 21:57:30.809753 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d287e2ca-f134-4e34-96f7-50a3055ee119-service-ca\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:57:30.809847 master-0 kubenswrapper[7480]: I0308 21:57:30.809770 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtbpk\" (UniqueName: \"kubernetes.io/projected/1dfc8afd-2330-46a4-ae5b-36522102b332-kube-api-access-jtbpk\") pod \"multus-admission-controller-8d675b596-ddw98\" (UID: \"1dfc8afd-2330-46a4-ae5b-36522102b332\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddw98" Mar 08 21:57:30.809847 master-0 kubenswrapper[7480]: I0308 21:57:30.809712 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/96a67acb-9cc6-4793-b99a-01479b239d76-whereabouts-configmap\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 21:57:30.809847 master-0 kubenswrapper[7480]: I0308 21:57:30.809830 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-trusted-ca\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 21:57:30.810121 master-0 kubenswrapper[7480]: I0308 21:57:30.809827 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/081acedd-4c88-461f-80f3-e80fdbadb725-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-ngrjm\" (UID: \"081acedd-4c88-461f-80f3-e80fdbadb725\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" Mar 08 21:57:30.810121 master-0 kubenswrapper[7480]: I0308 21:57:30.809982 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwqqw\" (UniqueName: \"kubernetes.io/projected/a8e00c74-fb72-4e3d-a22c-c38a4772a813-kube-api-access-gwqqw\") pod \"openshift-apiserver-operator-799b6db4d7-nqz5k\" (UID: \"a8e00c74-fb72-4e3d-a22c-c38a4772a813\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k" Mar 08 21:57:30.810121 master-0 kubenswrapper[7480]: I0308 21:57:30.810031 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-x5zxr\" (UID: \"be431b74-1116-4b0f-8b25-bbb0408411b0\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 21:57:30.810288 master-0 kubenswrapper[7480]: I0308 21:57:30.810129 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-bound-sa-token\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 21:57:30.810288 master-0 kubenswrapper[7480]: I0308 21:57:30.810227 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96gl4\" (UniqueName: \"kubernetes.io/projected/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-kube-api-access-96gl4\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 21:57:30.810383 master-0 kubenswrapper[7480]: I0308 21:57:30.810338 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:30.810468 master-0 kubenswrapper[7480]: I0308 21:57:30.810415 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l47w\" (UniqueName: \"kubernetes.io/projected/2851c096-f5cb-4a46-a5a0-ac0b1341033b-kube-api-access-2l47w\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:30.810468 master-0 kubenswrapper[7480]: I0308 21:57:30.810457 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-tuning-conf-dir\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 21:57:30.810597 master-0 kubenswrapper[7480]: I0308 21:57:30.810496 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/081acedd-4c88-461f-80f3-e80fdbadb725-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-ngrjm\" (UID: \"081acedd-4c88-461f-80f3-e80fdbadb725\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" Mar 08 21:57:30.810749 master-0 kubenswrapper[7480]: I0308 21:57:30.810696 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-daemon-config\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.810814 master-0 kubenswrapper[7480]: I0308 21:57:30.810744 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-trusted-ca\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: 
\"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 21:57:30.810814 master-0 kubenswrapper[7480]: I0308 21:57:30.810801 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/081acedd-4c88-461f-80f3-e80fdbadb725-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-ngrjm\" (UID: \"081acedd-4c88-461f-80f3-e80fdbadb725\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" Mar 08 21:57:30.814573 master-0 kubenswrapper[7480]: I0308 21:57:30.814527 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 08 21:57:30.814912 master-0 kubenswrapper[7480]: I0308 21:57:30.814855 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 21:57:30.819671 master-0 kubenswrapper[7480]: I0308 21:57:30.819625 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2851c096-f5cb-4a46-a5a0-ac0b1341033b-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:30.834594 master-0 kubenswrapper[7480]: I0308 21:57:30.834541 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 08 21:57:30.838352 master-0 kubenswrapper[7480]: I0308 21:57:30.838303 7480 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Mar 08 21:57:30.902239 master-0 kubenswrapper[7480]: I0308 21:57:30.902124 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h4vv\" (UniqueName: \"kubernetes.io/projected/de89c423-0f2a-440f-9fa9-92fefea84b09-kube-api-access-7h4vv\") pod \"cluster-olm-operator-77899cf6d-mnf25\" (UID: \"de89c423-0f2a-440f-9fa9-92fefea84b09\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" Mar 08 21:57:30.912032 master-0 kubenswrapper[7480]: I0308 21:57:30.911950 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-systemd-units\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:30.912032 master-0 kubenswrapper[7480]: I0308 21:57:30.912006 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-slash\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:30.912437 master-0 kubenswrapper[7480]: I0308 21:57:30.912193 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-run-k8s-cni-cncf-io\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.912437 master-0 kubenswrapper[7480]: I0308 21:57:30.912272 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-var-lib-cni-bin\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.912437 master-0 kubenswrapper[7480]: I0308 21:57:30.912400 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-var-lib-cni-bin\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.912944 master-0 kubenswrapper[7480]: I0308 21:57:30.912469 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/dfe625a1-5ba4-491f-9ab3-5d91154961a0-ovnkube-identity-cm\") pod \"network-node-identity-trhtl\" (UID: \"dfe625a1-5ba4-491f-9ab3-5d91154961a0\") " pod="openshift-network-node-identity/network-node-identity-trhtl" Mar 08 21:57:30.912944 master-0 kubenswrapper[7480]: I0308 21:57:30.912516 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9c64\" (UniqueName: \"kubernetes.io/projected/dfe625a1-5ba4-491f-9ab3-5d91154961a0-kube-api-access-j9c64\") pod \"network-node-identity-trhtl\" (UID: \"dfe625a1-5ba4-491f-9ab3-5d91154961a0\") " pod="openshift-network-node-identity/network-node-identity-trhtl" Mar 08 21:57:30.912944 master-0 kubenswrapper[7480]: I0308 21:57:30.912543 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-run-k8s-cni-cncf-io\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.912944 master-0 kubenswrapper[7480]: I0308 21:57:30.912569 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-mt484\" (UID: \"4ef806a4-5486-43a9-8bfa-b1670c888dc1\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" Mar 08 21:57:30.912944 master-0 kubenswrapper[7480]: I0308 21:57:30.912689 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dfe625a1-5ba4-491f-9ab3-5d91154961a0-env-overrides\") pod \"network-node-identity-trhtl\" (UID: \"dfe625a1-5ba4-491f-9ab3-5d91154961a0\") " pod="openshift-network-node-identity/network-node-identity-trhtl" Mar 08 21:57:30.912944 master-0 kubenswrapper[7480]: I0308 21:57:30.912742 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:30.912944 master-0 kubenswrapper[7480]: I0308 21:57:30.912789 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dfe625a1-5ba4-491f-9ab3-5d91154961a0-webhook-cert\") pod \"network-node-identity-trhtl\" (UID: \"dfe625a1-5ba4-491f-9ab3-5d91154961a0\") " pod="openshift-network-node-identity/network-node-identity-trhtl" Mar 08 21:57:30.912944 master-0 kubenswrapper[7480]: I0308 21:57:30.912831 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b358dcb7-d01f-4206-b636-b55a599a73bd-host-slash\") pod \"iptables-alerter-pwn9k\" (UID: \"b358dcb7-d01f-4206-b636-b55a599a73bd\") " pod="openshift-network-operator/iptables-alerter-pwn9k" Mar 08 21:57:30.912944 master-0 kubenswrapper[7480]: I0308 21:57:30.912882 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-run-ovn-kubernetes\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:30.914207 master-0 kubenswrapper[7480]: I0308 21:57:30.913000 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmdmr\" (UniqueName: \"kubernetes.io/projected/b358dcb7-d01f-4206-b636-b55a599a73bd-kube-api-access-bmdmr\") pod \"iptables-alerter-pwn9k\" (UID: \"b358dcb7-d01f-4206-b636-b55a599a73bd\") " pod="openshift-network-operator/iptables-alerter-pwn9k" Mar 08 21:57:30.914207 master-0 kubenswrapper[7480]: E0308 21:57:30.913116 7480 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 08 21:57:30.914207 master-0 kubenswrapper[7480]: E0308 21:57:30.913196 7480 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 08 21:57:30.914207 master-0 kubenswrapper[7480]: I0308 21:57:30.913138 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:57:30.914207 master-0 kubenswrapper[7480]: I0308 21:57:30.913270 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/dfe625a1-5ba4-491f-9ab3-5d91154961a0-ovnkube-identity-cm\") pod \"network-node-identity-trhtl\" (UID: \"dfe625a1-5ba4-491f-9ab3-5d91154961a0\") " pod="openshift-network-node-identity/network-node-identity-trhtl" Mar 08 21:57:30.914207 master-0 kubenswrapper[7480]: I0308 21:57:30.913168 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dfe625a1-5ba4-491f-9ab3-5d91154961a0-env-overrides\") pod \"network-node-identity-trhtl\" (UID: \"dfe625a1-5ba4-491f-9ab3-5d91154961a0\") " pod="openshift-network-node-identity/network-node-identity-trhtl" Mar 08 21:57:30.914207 master-0 
kubenswrapper[7480]: E0308 21:57:30.913238 7480 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 08 21:57:30.914207 master-0 kubenswrapper[7480]: E0308 21:57:30.913282 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert podName:2851c096-f5cb-4a46-a5a0-ac0b1341033b nodeName:}" failed. No retries permitted until 2026-03-08 21:57:31.413249315 +0000 UTC m=+1.866869957 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-c4lpf" (UID: "2851c096-f5cb-4a46-a5a0-ac0b1341033b") : secret "performance-addon-operator-webhook-cert" not found Mar 08 21:57:30.914207 master-0 kubenswrapper[7480]: E0308 21:57:30.913431 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls podName:4ef806a4-5486-43a9-8bfa-b1670c888dc1 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:31.413362828 +0000 UTC m=+1.866983660 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-mt484" (UID: "4ef806a4-5486-43a9-8bfa-b1670c888dc1") : secret "cluster-monitoring-operator-tls" not found Mar 08 21:57:30.914207 master-0 kubenswrapper[7480]: I0308 21:57:30.913505 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs\") pod \"network-metrics-daemon-lqdbv\" (UID: \"44e67e41-045e-42ef-8f60-6ef15606d6a2\") " pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:57:30.914207 master-0 kubenswrapper[7480]: E0308 21:57:30.913575 7480 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 08 21:57:30.914207 master-0 kubenswrapper[7480]: I0308 21:57:30.913589 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5xq4\" (UniqueName: \"kubernetes.io/projected/f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e-kube-api-access-l5xq4\") pod \"network-check-target-djlff\" (UID: \"f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e\") " pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:57:30.914207 master-0 kubenswrapper[7480]: I0308 21:57:30.913621 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dfe625a1-5ba4-491f-9ab3-5d91154961a0-webhook-cert\") pod \"network-node-identity-trhtl\" (UID: \"dfe625a1-5ba4-491f-9ab3-5d91154961a0\") " pod="openshift-network-node-identity/network-node-identity-trhtl" Mar 08 21:57:30.914207 master-0 kubenswrapper[7480]: E0308 21:57:30.913643 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert podName:d287e2ca-f134-4e34-96f7-50a3055ee119 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:31.413610614 +0000 UTC m=+1.867231246 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert") pod "cluster-version-operator-745944c6b7-d8fd8" (UID: "d287e2ca-f134-4e34-96f7-50a3055ee119") : secret "cluster-version-operator-serving-cert" not found Mar 08 21:57:30.914207 master-0 kubenswrapper[7480]: E0308 21:57:30.913679 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs podName:44e67e41-045e-42ef-8f60-6ef15606d6a2 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:31.413666885 +0000 UTC m=+1.867287527 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs") pod "network-metrics-daemon-lqdbv" (UID: "44e67e41-045e-42ef-8f60-6ef15606d6a2") : secret "metrics-daemon-secret" not found Mar 08 21:57:30.914207 master-0 kubenswrapper[7480]: I0308 21:57:30.913728 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-etc-openvswitch\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:30.915791 master-0 kubenswrapper[7480]: I0308 21:57:30.914837 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d287e2ca-f134-4e34-96f7-50a3055ee119-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:57:30.915791 master-0 kubenswrapper[7480]: I0308 21:57:30.914958 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d287e2ca-f134-4e34-96f7-50a3055ee119-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:57:30.916762 master-0 kubenswrapper[7480]: I0308 21:57:30.916704 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-node-log\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:30.916894 master-0 kubenswrapper[7480]: I0308 21:57:30.916769 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-system-cni-dir\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 21:57:30.916894 master-0 kubenswrapper[7480]: I0308 21:57:30.916808 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-os-release\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 21:57:30.916894 master-0 kubenswrapper[7480]: I0308 
21:57:30.916845 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 21:57:30.916894 master-0 kubenswrapper[7480]: I0308 21:57:30.916844 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-system-cni-dir\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 21:57:30.917141 master-0 kubenswrapper[7480]: I0308 21:57:30.916931 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-os-release\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 21:57:30.917141 master-0 kubenswrapper[7480]: E0308 21:57:30.916972 7480 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 08 21:57:30.917141 master-0 kubenswrapper[7480]: I0308 21:57:30.916984 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-system-cni-dir\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.917141 master-0 kubenswrapper[7480]: E0308 21:57:30.917019 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls podName:84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed nodeName:}" failed. No retries permitted until 2026-03-08 21:57:31.417002173 +0000 UTC m=+1.870622815 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls") pod "ingress-operator-677db989d6-cjdgr" (UID: "84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed") : secret "metrics-tls" not found Mar 08 21:57:30.917141 master-0 kubenswrapper[7480]: I0308 21:57:30.917058 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcqnj\" (UniqueName: \"kubernetes.io/projected/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-kube-api-access-pcqnj\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:30.917449 master-0 kubenswrapper[7480]: I0308 21:57:30.917158 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-system-cni-dir\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.917449 master-0 kubenswrapper[7480]: I0308 21:57:30.917352 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-conf-dir\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.918344 master-0 kubenswrapper[7480]: I0308 21:57:30.917130 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-conf-dir\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.918485 master-0 kubenswrapper[7480]: I0308 21:57:30.918404 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:30.918613 master-0 kubenswrapper[7480]: I0308 21:57:30.918516 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-cni-dir\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.918932 master-0 kubenswrapper[7480]: I0308 21:57:30.918655 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-cnibin\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 21:57:30.918932 master-0 kubenswrapper[7480]: I0308 21:57:30.918659 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-cni-dir\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.918932 master-0 kubenswrapper[7480]: I0308 21:57:30.918766 7480 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-cnibin\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 21:57:30.918932 master-0 kubenswrapper[7480]: I0308 21:57:30.918784 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert\") pod \"olm-operator-d64cfc9db-xqh7x\" (UID: \"3c50dd1f-fcbc-412c-a1cc-0738ea4464e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" Mar 08 21:57:30.918932 master-0 kubenswrapper[7480]: I0308 21:57:30.918872 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d287e2ca-f134-4e34-96f7-50a3055ee119-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:57:30.918932 master-0 kubenswrapper[7480]: I0308 21:57:30.918911 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls\") pod \"dns-operator-589895fbb7-wtvp5\" (UID: \"df48e7e0-0659-48e2-9b6a-32c964ff47b2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" Mar 08 21:57:30.919451 master-0 kubenswrapper[7480]: E0308 21:57:30.918953 7480 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 08 21:57:30.919451 master-0 kubenswrapper[7480]: I0308 21:57:30.919034 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d287e2ca-f134-4e34-96f7-50a3055ee119-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:57:30.919451 master-0 kubenswrapper[7480]: E0308 21:57:30.919050 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert podName:3c50dd1f-fcbc-412c-a1cc-0738ea4464e0 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:31.419023535 +0000 UTC m=+1.872644377 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert") pod "olm-operator-d64cfc9db-xqh7x" (UID: "3c50dd1f-fcbc-412c-a1cc-0738ea4464e0") : secret "olm-operator-serving-cert" not found Mar 08 21:57:30.919451 master-0 kubenswrapper[7480]: I0308 21:57:30.918964 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-run-netns\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:30.919451 master-0 kubenswrapper[7480]: I0308 21:57:30.919184 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-ovnkube-config\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:30.919451 master-0 kubenswrapper[7480]: E0308 21:57:30.919363 7480 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 08 21:57:30.919451 master-0 kubenswrapper[7480]: E0308 21:57:30.919424 7480 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 08 21:57:30.919838 master-0 kubenswrapper[7480]: E0308 21:57:30.919493 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls podName:df48e7e0-0659-48e2-9b6a-32c964ff47b2 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:31.419469436 +0000 UTC m=+1.873090078 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls") pod "dns-operator-589895fbb7-wtvp5" (UID: "df48e7e0-0659-48e2-9b6a-32c964ff47b2") : secret "metrics-tls" not found Mar 08 21:57:30.919904 master-0 kubenswrapper[7480]: E0308 21:57:30.919861 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls podName:a913c639-ebfc-42a3-85cd-8a460027d3ec nodeName:}" failed. No retries permitted until 2026-03-08 21:57:31.419753544 +0000 UTC m=+1.873374186 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-g2ddr" (UID: "a913c639-ebfc-42a3-85cd-8a460027d3ec") : secret "image-registry-operator-tls" not found Mar 08 21:57:30.919904 master-0 kubenswrapper[7480]: I0308 21:57:30.919881 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-ovnkube-config\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:30.920029 master-0 kubenswrapper[7480]: I0308 21:57:30.919930 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-env-overrides\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:30.920320 master-0 kubenswrapper[7480]: I0308 21:57:30.920240 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-env-overrides\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:30.920421 master-0 kubenswrapper[7480]: I0308 21:57:30.920296 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-var-lib-cni-multus\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.920951 master-0 kubenswrapper[7480]: I0308 21:57:30.920016 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-var-lib-cni-multus\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.921135 master-0 kubenswrapper[7480]: I0308 21:57:30.921049 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-var-lib-kubelet\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.921262 master-0 kubenswrapper[7480]: I0308 21:57:30.921200 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-var-lib-kubelet\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.921393 master-0 kubenswrapper[7480]: I0308 21:57:30.921340 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-etc-kubernetes\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.921481 master-0 kubenswrapper[7480]: I0308 21:57:30.921445 7480 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-run-multus-certs\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.921574 master-0 kubenswrapper[7480]: I0308 21:57:30.921508 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-cni-bin\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:30.921574 master-0 kubenswrapper[7480]: I0308 21:57:30.921550 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/a21e2296-10cb-4c70-ac3e-2173d35faac4-host-etc-kube\") pod \"network-operator-7c649bf6d4-znt8q\" (UID: \"a21e2296-10cb-4c70-ac3e-2173d35faac4\") " pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" Mar 08 21:57:30.921730 master-0 kubenswrapper[7480]: I0308 21:57:30.921587 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:30.921730 master-0 kubenswrapper[7480]: I0308 21:57:30.921643 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngf2z\" (UniqueName: \"kubernetes.io/projected/d4d01185-e485-4697-92c2-31a044f25d82-kube-api-access-ngf2z\") pod \"openshift-controller-manager-operator-8565d84698-x8jg8\" (UID: \"d4d01185-e485-4697-92c2-31a044f25d82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" Mar 08 21:57:30.921730 master-0 kubenswrapper[7480]: I0308 21:57:30.921719 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-etc-kubernetes\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.921977 master-0 kubenswrapper[7480]: I0308 21:57:30.921826 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-run-multus-certs\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.921977 master-0 kubenswrapper[7480]: I0308 21:57:30.921902 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/a21e2296-10cb-4c70-ac3e-2173d35faac4-host-etc-kube\") pod \"network-operator-7c649bf6d4-znt8q\" (UID: \"a21e2296-10cb-4c70-ac3e-2173d35faac4\") " pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" Mar 08 21:57:30.921977 master-0 kubenswrapper[7480]: I0308 21:57:30.921658 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs\") pod \"multus-admission-controller-8d675b596-ddw98\" (UID: 
\"1dfc8afd-2330-46a4-ae5b-36522102b332\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddw98" Mar 08 21:57:30.922269 master-0 kubenswrapper[7480]: E0308 21:57:30.921830 7480 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 08 21:57:30.922269 master-0 kubenswrapper[7480]: I0308 21:57:30.922107 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-var-lib-openvswitch\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:30.922269 master-0 kubenswrapper[7480]: E0308 21:57:30.922167 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs podName:1dfc8afd-2330-46a4-ae5b-36522102b332 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:31.422138405 +0000 UTC m=+1.875759047 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs") pod "multus-admission-controller-8d675b596-ddw98" (UID: "1dfc8afd-2330-46a4-ae5b-36522102b332") : secret "multus-admission-controller-secret" not found Mar 08 21:57:30.922269 master-0 kubenswrapper[7480]: I0308 21:57:30.922251 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-os-release\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.922518 master-0 kubenswrapper[7480]: I0308 21:57:30.922320 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-x5zxr\" (UID: \"be431b74-1116-4b0f-8b25-bbb0408411b0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 21:57:30.922518 master-0 kubenswrapper[7480]: I0308 21:57:30.922389 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:30.922518 master-0 kubenswrapper[7480]: I0308 21:57:30.922436 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-tuning-conf-dir\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 21:57:30.922518 master-0 kubenswrapper[7480]: I0308 21:57:30.922482 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-run-systemd\") pod \"ovnkube-node-g4d2r\" (UID: 
\"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:30.922518 master-0 kubenswrapper[7480]: I0308 21:57:30.922519 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-run-openvswitch\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:30.922815 master-0 kubenswrapper[7480]: I0308 21:57:30.922583 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-ovn-node-metrics-cert\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:30.922815 master-0 kubenswrapper[7480]: I0308 21:57:30.922624 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-cnibin\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.922815 master-0 kubenswrapper[7480]: I0308 21:57:30.922707 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-ovnkube-script-lib\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:30.922815 master-0 kubenswrapper[7480]: I0308 21:57:30.922760 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-kubelet\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:30.922815 master-0 kubenswrapper[7480]: I0308 21:57:30.922798 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-5ljhh\" (UID: \"7e0267ba-5dd7-4e81-885f-95b27a7b42ea\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 21:57:30.923102 master-0 kubenswrapper[7480]: I0308 21:57:30.922835 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-run-netns\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.923102 master-0 kubenswrapper[7480]: I0308 21:57:30.922888 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-cni-netd\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:30.923102 master-0 kubenswrapper[7480]: I0308 21:57:30.922928 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-socket-dir-parent\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.923102 master-0 kubenswrapper[7480]: I0308 21:57:30.922972 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-run-ovn\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:30.923102 master-0 kubenswrapper[7480]: I0308 21:57:30.923018 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b358dcb7-d01f-4206-b636-b55a599a73bd-iptables-alerter-script\") pod \"iptables-alerter-pwn9k\" (UID: \"b358dcb7-d01f-4206-b636-b55a599a73bd\") " pod="openshift-network-operator/iptables-alerter-pwn9k" Mar 08 21:57:30.923102 master-0 kubenswrapper[7480]: I0308 21:57:30.923057 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-hostroot\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.923418 master-0 kubenswrapper[7480]: I0308 21:57:30.923123 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-log-socket\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:30.923418 master-0 kubenswrapper[7480]: I0308 21:57:30.923174 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert\") pod \"catalog-operator-7d9c49f57b-6q5t2\" (UID: \"83b5f0b6-adee-4820-8212-b4d182b178d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" Mar 08 21:57:30.923418 master-0 kubenswrapper[7480]: I0308 21:57:30.923206 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-ovn-node-metrics-cert\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:30.923418 master-0 kubenswrapper[7480]: I0308 21:57:30.923314 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-os-release\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.923418 master-0 kubenswrapper[7480]: E0308 21:57:30.923341 7480 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 08 21:57:30.923418 master-0 kubenswrapper[7480]: E0308 21:57:30.923390 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert podName:83b5f0b6-adee-4820-8212-b4d182b178d2 nodeName:}" failed. 
No retries permitted until 2026-03-08 21:57:31.423374468 +0000 UTC m=+1.876995110 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert") pod "catalog-operator-7d9c49f57b-6q5t2" (UID: "83b5f0b6-adee-4820-8212-b4d182b178d2") : secret "catalog-operator-serving-cert" not found Mar 08 21:57:30.923418 master-0 kubenswrapper[7480]: E0308 21:57:30.923395 7480 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 08 21:57:30.923784 master-0 kubenswrapper[7480]: E0308 21:57:30.923447 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert podName:be431b74-1116-4b0f-8b25-bbb0408411b0 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:31.423429659 +0000 UTC m=+1.877050301 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-x5zxr" (UID: "be431b74-1116-4b0f-8b25-bbb0408411b0") : secret "package-server-manager-serving-cert" not found Mar 08 21:57:30.923784 master-0 kubenswrapper[7480]: I0308 21:57:30.923464 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-cnibin\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.923784 master-0 kubenswrapper[7480]: E0308 21:57:30.923521 7480 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 08 21:57:30.923784 master-0 kubenswrapper[7480]: E0308 21:57:30.923561 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls podName:2851c096-f5cb-4a46-a5a0-ac0b1341033b nodeName:}" failed. No retries permitted until 2026-03-08 21:57:31.423549132 +0000 UTC m=+1.877169774 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-c4lpf" (UID: "2851c096-f5cb-4a46-a5a0-ac0b1341033b") : secret "node-tuning-operator-tls" not found Mar 08 21:57:30.923784 master-0 kubenswrapper[7480]: I0308 21:57:30.923624 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-tuning-conf-dir\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 21:57:30.923784 master-0 kubenswrapper[7480]: I0308 21:57:30.923734 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-socket-dir-parent\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.924152 master-0 kubenswrapper[7480]: E0308 21:57:30.923825 7480 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 08 21:57:30.924152 master-0 kubenswrapper[7480]: I0308 21:57:30.923832 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-ovnkube-script-lib\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:30.924152 master-0 kubenswrapper[7480]: E0308 21:57:30.923863 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics podName:7e0267ba-5dd7-4e81-885f-95b27a7b42ea nodeName:}" failed. No retries permitted until 2026-03-08 21:57:31.423850951 +0000 UTC m=+1.877471583 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-5ljhh" (UID: "7e0267ba-5dd7-4e81-885f-95b27a7b42ea") : secret "marketplace-operator-metrics" not found Mar 08 21:57:30.924152 master-0 kubenswrapper[7480]: I0308 21:57:30.923912 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-run-netns\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.924152 master-0 kubenswrapper[7480]: I0308 21:57:30.923927 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-hostroot\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:30.924503 master-0 kubenswrapper[7480]: I0308 21:57:30.924303 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b358dcb7-d01f-4206-b636-b55a599a73bd-iptables-alerter-script\") pod \"iptables-alerter-pwn9k\" (UID: \"b358dcb7-d01f-4206-b636-b55a599a73bd\") " pod="openshift-network-operator/iptables-alerter-pwn9k" Mar 08 21:57:30.944673 master-0 kubenswrapper[7480]: I0308 21:57:30.944577 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b849f992-1020-4633-98be-75705b962fa9-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-8pqc2\" (UID: \"b849f992-1020-4633-98be-75705b962fa9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2" Mar 08 21:57:30.958362 master-0 kubenswrapper[7480]: I0308 21:57:30.958060 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a913c639-ebfc-42a3-85cd-8a460027d3ec-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:30.979446 master-0 kubenswrapper[7480]: I0308 21:57:30.979285 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwdhp\" (UniqueName: \"kubernetes.io/projected/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-kube-api-access-vwdhp\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 21:57:30.999725 master-0 kubenswrapper[7480]: I0308 21:57:30.999635 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hstt\" (UniqueName: \"kubernetes.io/projected/4382d186-34e4-40af-9b92-bb17ddcaa23f-kube-api-access-2hstt\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 21:57:31.012666 master-0 kubenswrapper[7480]: I0308 21:57:31.012597 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6fbc12f-3c27-4a7a-933f-43a55c960335-kube-api-access\") pod 
\"openshift-kube-scheduler-operator-5c74bfc494-2mspg\" (UID: \"f6fbc12f-3c27-4a7a-933f-43a55c960335\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg" Mar 08 21:57:31.024453 master-0 kubenswrapper[7480]: I0308 21:57:31.024401 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b358dcb7-d01f-4206-b636-b55a599a73bd-host-slash\") pod \"iptables-alerter-pwn9k\" (UID: \"b358dcb7-d01f-4206-b636-b55a599a73bd\") " pod="openshift-network-operator/iptables-alerter-pwn9k" Mar 08 21:57:31.024532 master-0 kubenswrapper[7480]: I0308 21:57:31.024514 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-run-ovn-kubernetes\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:31.024720 master-0 kubenswrapper[7480]: I0308 21:57:31.024595 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b358dcb7-d01f-4206-b636-b55a599a73bd-host-slash\") pod \"iptables-alerter-pwn9k\" (UID: \"b358dcb7-d01f-4206-b636-b55a599a73bd\") " pod="openshift-network-operator/iptables-alerter-pwn9k" Mar 08 21:57:31.024720 master-0 kubenswrapper[7480]: I0308 21:57:31.024697 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-etc-openvswitch\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:31.024932 master-0 kubenswrapper[7480]: I0308 21:57:31.024832 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-run-ovn-kubernetes\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:31.025041 master-0 kubenswrapper[7480]: I0308 21:57:31.024985 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-etc-openvswitch\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:31.025131 master-0 kubenswrapper[7480]: I0308 21:57:31.025095 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-node-log\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:31.025230 master-0 kubenswrapper[7480]: I0308 21:57:31.025158 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-node-log\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:31.025429 master-0 kubenswrapper[7480]: I0308 21:57:31.025384 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-run-netns\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:31.025536 master-0 kubenswrapper[7480]: I0308 21:57:31.025507 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-cni-bin\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:31.025597 master-0 kubenswrapper[7480]: I0308 21:57:31.025558 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-cni-bin\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:31.025597 master-0 kubenswrapper[7480]: I0308 21:57:31.025560 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:31.025597 master-0 kubenswrapper[7480]: I0308 21:57:31.025544 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-run-netns\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:31.025741 master-0 kubenswrapper[7480]: I0308 21:57:31.025601 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:31.025792 master-0 kubenswrapper[7480]: I0308 21:57:31.025728 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-var-lib-openvswitch\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:31.025792 master-0 kubenswrapper[7480]: I0308 21:57:31.025783 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-var-lib-openvswitch\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:31.026111 master-0 kubenswrapper[7480]: I0308 21:57:31.026034 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-run-systemd\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:31.026188 master-0 kubenswrapper[7480]: I0308 21:57:31.026115 7480 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-run-systemd\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:31.026188 master-0 kubenswrapper[7480]: I0308 21:57:31.026145 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-run-openvswitch\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:31.026286 master-0 kubenswrapper[7480]: I0308 21:57:31.026215 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-run-openvswitch\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:31.026333 master-0 kubenswrapper[7480]: I0308 21:57:31.026306 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-kubelet\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:31.026382 master-0 kubenswrapper[7480]: I0308 21:57:31.026369 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-kubelet\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:31.026442 master-0 kubenswrapper[7480]: I0308 21:57:31.026372 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-run-ovn\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:31.026491 master-0 kubenswrapper[7480]: I0308 21:57:31.026437 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-cni-netd\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:31.026491 master-0 kubenswrapper[7480]: I0308 21:57:31.026475 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-log-socket\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:31.026491 master-0 kubenswrapper[7480]: I0308 21:57:31.026483 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-run-ovn\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:31.026627 master-0 kubenswrapper[7480]: I0308 21:57:31.026569 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"log-socket\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-log-socket\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:31.026627 master-0 kubenswrapper[7480]: I0308 21:57:31.026612 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-cni-netd\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:31.026758 master-0 kubenswrapper[7480]: I0308 21:57:31.026734 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-systemd-units\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:31.026841 master-0 kubenswrapper[7480]: I0308 21:57:31.026809 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-slash\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:31.027325 master-0 kubenswrapper[7480]: I0308 21:57:31.026982 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-systemd-units\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:31.027325 master-0 kubenswrapper[7480]: I0308 21:57:31.027026 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-slash\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:31.040726 master-0 kubenswrapper[7480]: I0308 21:57:31.039296 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9xj9\" (UniqueName: \"kubernetes.io/projected/96a67acb-9cc6-4793-b99a-01479b239d76-kube-api-access-d9xj9\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 21:57:31.052524 master-0 kubenswrapper[7480]: I0308 21:57:31.045862 7480 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 08 21:57:31.062424 master-0 kubenswrapper[7480]: I0308 21:57:31.061617 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dr4p\" (UniqueName: \"kubernetes.io/projected/df48e7e0-0659-48e2-9b6a-32c964ff47b2-kube-api-access-4dr4p\") pod \"dns-operator-589895fbb7-wtvp5\" (UID: \"df48e7e0-0659-48e2-9b6a-32c964ff47b2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" Mar 08 21:57:31.075545 master-0 kubenswrapper[7480]: I0308 21:57:31.075468 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv57k\" (UniqueName: \"kubernetes.io/projected/be431b74-1116-4b0f-8b25-bbb0408411b0-kube-api-access-tv57k\") pod \"package-server-manager-854648ff6d-x5zxr\" (UID: 
\"be431b74-1116-4b0f-8b25-bbb0408411b0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 21:57:31.095134 master-0 kubenswrapper[7480]: I0308 21:57:31.094021 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxg7t\" (UniqueName: \"kubernetes.io/projected/385e69e4-d443-44bb-8ee4-578a1c902c62-kube-api-access-vxg7t\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 21:57:31.114263 master-0 kubenswrapper[7480]: I0308 21:57:31.114199 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tlmx\" (UniqueName: \"kubernetes.io/projected/0d851f97-b21e-432e-a4c3-dc0a8ff00e84-kube-api-access-7tlmx\") pod \"service-ca-operator-69b6fc6b88-pkcp5\" (UID: \"0d851f97-b21e-432e-a4c3-dc0a8ff00e84\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5" Mar 08 21:57:31.128816 master-0 kubenswrapper[7480]: I0308 21:57:31.128744 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04fb7bdb-fb5a-4187-94a3-67c8f09684ed-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-mww2c\" (UID: \"04fb7bdb-fb5a-4187-94a3-67c8f09684ed\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" Mar 08 21:57:31.154517 master-0 kubenswrapper[7480]: I0308 21:57:31.154068 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpxls\" (UniqueName: \"kubernetes.io/projected/081acedd-4c88-461f-80f3-e80fdbadb725-kube-api-access-cpxls\") pod \"ovnkube-control-plane-66b55d57d-ngrjm\" (UID: \"081acedd-4c88-461f-80f3-e80fdbadb725\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" Mar 08 21:57:31.169265 master-0 kubenswrapper[7480]: I0308 21:57:31.169198 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drcp8\" (UniqueName: \"kubernetes.io/projected/a913c639-ebfc-42a3-85cd-8a460027d3ec-kube-api-access-drcp8\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:31.192826 master-0 kubenswrapper[7480]: I0308 21:57:31.192604 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzlpq\" (UniqueName: \"kubernetes.io/projected/4ef806a4-5486-43a9-8bfa-b1670c888dc1-kube-api-access-qzlpq\") pod \"cluster-monitoring-operator-674cbfbd9d-mt484\" (UID: \"4ef806a4-5486-43a9-8bfa-b1670c888dc1\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" Mar 08 21:57:31.207102 master-0 kubenswrapper[7480]: I0308 21:57:31.206449 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pwq4\" (UniqueName: \"kubernetes.io/projected/83b5f0b6-adee-4820-8212-b4d182b178d2-kube-api-access-5pwq4\") pod \"catalog-operator-7d9c49f57b-6q5t2\" (UID: \"83b5f0b6-adee-4820-8212-b4d182b178d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" Mar 08 21:57:31.229415 master-0 kubenswrapper[7480]: I0308 21:57:31.229365 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z7fx\" (UniqueName: \"kubernetes.io/projected/971ffa86-4d52-4dc3-ba28-03d116ec3494-kube-api-access-7z7fx\") pod \"kube-storage-version-migrator-operator-7f65c457f5-zk8sw\" (UID: 
\"971ffa86-4d52-4dc3-ba28-03d116ec3494\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw" Mar 08 21:57:31.260061 master-0 kubenswrapper[7480]: I0308 21:57:31.260002 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d287e2ca-f134-4e34-96f7-50a3055ee119-kube-api-access\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:57:31.270027 master-0 kubenswrapper[7480]: I0308 21:57:31.269985 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xcbb\" (UniqueName: \"kubernetes.io/projected/a21e2296-10cb-4c70-ac3e-2173d35faac4-kube-api-access-7xcbb\") pod \"network-operator-7c649bf6d4-znt8q\" (UID: \"a21e2296-10cb-4c70-ac3e-2173d35faac4\") " pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" Mar 08 21:57:31.293106 master-0 kubenswrapper[7480]: I0308 21:57:31.293013 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl4xt\" (UniqueName: \"kubernetes.io/projected/44e67e41-045e-42ef-8f60-6ef15606d6a2-kube-api-access-zl4xt\") pod \"network-metrics-daemon-lqdbv\" (UID: \"44e67e41-045e-42ef-8f60-6ef15606d6a2\") " pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:57:31.311479 master-0 kubenswrapper[7480]: I0308 21:57:31.311427 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-784c7\" (UniqueName: \"kubernetes.io/projected/d0641333-feda-44c5-baf5-ceee4ce3fd8f-kube-api-access-784c7\") pod \"openshift-config-operator-64488f9d78-krpfs\" (UID: \"d0641333-feda-44c5-baf5-ceee4ce3fd8f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" Mar 08 21:57:31.335233 master-0 kubenswrapper[7480]: I0308 21:57:31.335162 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff6pm\" (UniqueName: \"kubernetes.io/projected/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-kube-api-access-ff6pm\") pod \"olm-operator-d64cfc9db-xqh7x\" (UID: \"3c50dd1f-fcbc-412c-a1cc-0738ea4464e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" Mar 08 21:57:31.353552 master-0 kubenswrapper[7480]: I0308 21:57:31.353499 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6ht7\" (UniqueName: \"kubernetes.io/projected/37bf82cb-adea-46d3-a899-136eb1d1f292-kube-api-access-v6ht7\") pod \"csi-snapshot-controller-operator-5685fbc7d-nl9qg\" (UID: \"37bf82cb-adea-46d3-a899-136eb1d1f292\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-nl9qg" Mar 08 21:57:31.375522 master-0 kubenswrapper[7480]: I0308 21:57:31.375401 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjt52\" (UniqueName: \"kubernetes.io/projected/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-kube-api-access-jjt52\") pod \"marketplace-operator-64bf9778cb-5ljhh\" (UID: \"7e0267ba-5dd7-4e81-885f-95b27a7b42ea\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 21:57:31.392002 master-0 kubenswrapper[7480]: I0308 21:57:31.391948 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtbpk\" (UniqueName: \"kubernetes.io/projected/1dfc8afd-2330-46a4-ae5b-36522102b332-kube-api-access-jtbpk\") pod 
\"multus-admission-controller-8d675b596-ddw98\" (UID: \"1dfc8afd-2330-46a4-ae5b-36522102b332\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddw98" Mar 08 21:57:31.422112 master-0 kubenswrapper[7480]: I0308 21:57:31.416516 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwqqw\" (UniqueName: \"kubernetes.io/projected/a8e00c74-fb72-4e3d-a22c-c38a4772a813-kube-api-access-gwqqw\") pod \"openshift-apiserver-operator-799b6db4d7-nqz5k\" (UID: \"a8e00c74-fb72-4e3d-a22c-c38a4772a813\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k" Mar 08 21:57:31.435409 master-0 kubenswrapper[7480]: I0308 21:57:31.435179 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert\") pod \"olm-operator-d64cfc9db-xqh7x\" (UID: \"3c50dd1f-fcbc-412c-a1cc-0738ea4464e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" Mar 08 21:57:31.435409 master-0 kubenswrapper[7480]: I0308 21:57:31.435251 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls\") pod \"dns-operator-589895fbb7-wtvp5\" (UID: \"df48e7e0-0659-48e2-9b6a-32c964ff47b2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" Mar 08 21:57:31.435409 master-0 kubenswrapper[7480]: I0308 21:57:31.435288 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs\") pod \"multus-admission-controller-8d675b596-ddw98\" (UID: \"1dfc8afd-2330-46a4-ae5b-36522102b332\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddw98" Mar 08 21:57:31.435409 master-0 kubenswrapper[7480]: I0308 21:57:31.435317 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-x5zxr\" (UID: \"be431b74-1116-4b0f-8b25-bbb0408411b0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 21:57:31.435409 master-0 kubenswrapper[7480]: I0308 21:57:31.435359 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:31.435409 master-0 kubenswrapper[7480]: I0308 21:57:31.435395 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-5ljhh\" (UID: \"7e0267ba-5dd7-4e81-885f-95b27a7b42ea\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 21:57:31.435670 master-0 kubenswrapper[7480]: I0308 21:57:31.435422 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert\") pod \"catalog-operator-7d9c49f57b-6q5t2\" (UID: \"83b5f0b6-adee-4820-8212-b4d182b178d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" Mar 08 21:57:31.435670 master-0 kubenswrapper[7480]: I0308 21:57:31.435454 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-mt484\" (UID: \"4ef806a4-5486-43a9-8bfa-b1670c888dc1\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" Mar 08 21:57:31.435670 master-0 kubenswrapper[7480]: I0308 21:57:31.435479 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:31.435670 master-0 kubenswrapper[7480]: I0308 21:57:31.435511 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:57:31.435670 master-0 kubenswrapper[7480]: I0308 21:57:31.435531 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs\") pod \"network-metrics-daemon-lqdbv\" (UID: \"44e67e41-045e-42ef-8f60-6ef15606d6a2\") " pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:57:31.435670 master-0 kubenswrapper[7480]: I0308 21:57:31.435582 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 21:57:31.435670 master-0 kubenswrapper[7480]: I0308 21:57:31.435614 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:31.435911 master-0 kubenswrapper[7480]: E0308 21:57:31.435751 7480 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 08 21:57:31.435911 master-0 kubenswrapper[7480]: E0308 21:57:31.435818 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls podName:a913c639-ebfc-42a3-85cd-8a460027d3ec nodeName:}" failed. No retries permitted until 2026-03-08 21:57:32.435799246 +0000 UTC m=+2.889419848 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-g2ddr" (UID: "a913c639-ebfc-42a3-85cd-8a460027d3ec") : secret "image-registry-operator-tls" not found Mar 08 21:57:31.437562 master-0 kubenswrapper[7480]: E0308 21:57:31.436464 7480 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 08 21:57:31.437562 master-0 kubenswrapper[7480]: E0308 21:57:31.436501 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert podName:3c50dd1f-fcbc-412c-a1cc-0738ea4464e0 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:32.436491064 +0000 UTC m=+2.890111666 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert") pod "olm-operator-d64cfc9db-xqh7x" (UID: "3c50dd1f-fcbc-412c-a1cc-0738ea4464e0") : secret "olm-operator-serving-cert" not found Mar 08 21:57:31.437562 master-0 kubenswrapper[7480]: E0308 21:57:31.436545 7480 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 08 21:57:31.437562 master-0 kubenswrapper[7480]: E0308 21:57:31.436568 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls podName:df48e7e0-0659-48e2-9b6a-32c964ff47b2 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:32.436560566 +0000 UTC m=+2.890181178 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls") pod "dns-operator-589895fbb7-wtvp5" (UID: "df48e7e0-0659-48e2-9b6a-32c964ff47b2") : secret "metrics-tls" not found Mar 08 21:57:31.437562 master-0 kubenswrapper[7480]: E0308 21:57:31.436610 7480 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 08 21:57:31.437562 master-0 kubenswrapper[7480]: E0308 21:57:31.436644 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs podName:1dfc8afd-2330-46a4-ae5b-36522102b332 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:32.436626528 +0000 UTC m=+2.890247130 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs") pod "multus-admission-controller-8d675b596-ddw98" (UID: "1dfc8afd-2330-46a4-ae5b-36522102b332") : secret "multus-admission-controller-secret" not found Mar 08 21:57:31.437562 master-0 kubenswrapper[7480]: E0308 21:57:31.436687 7480 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 08 21:57:31.437562 master-0 kubenswrapper[7480]: E0308 21:57:31.436710 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert podName:be431b74-1116-4b0f-8b25-bbb0408411b0 nodeName:}" failed. 
No retries permitted until 2026-03-08 21:57:32.4367036 +0000 UTC m=+2.890324202 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-x5zxr" (UID: "be431b74-1116-4b0f-8b25-bbb0408411b0") : secret "package-server-manager-serving-cert" not found Mar 08 21:57:31.437562 master-0 kubenswrapper[7480]: E0308 21:57:31.436751 7480 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 08 21:57:31.437562 master-0 kubenswrapper[7480]: E0308 21:57:31.436774 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls podName:2851c096-f5cb-4a46-a5a0-ac0b1341033b nodeName:}" failed. No retries permitted until 2026-03-08 21:57:32.436766681 +0000 UTC m=+2.890387283 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-c4lpf" (UID: "2851c096-f5cb-4a46-a5a0-ac0b1341033b") : secret "node-tuning-operator-tls" not found Mar 08 21:57:31.437562 master-0 kubenswrapper[7480]: E0308 21:57:31.436812 7480 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 08 21:57:31.437562 master-0 kubenswrapper[7480]: E0308 21:57:31.436834 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics podName:7e0267ba-5dd7-4e81-885f-95b27a7b42ea nodeName:}" failed. No retries permitted until 2026-03-08 21:57:32.436825473 +0000 UTC m=+2.890446075 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-5ljhh" (UID: "7e0267ba-5dd7-4e81-885f-95b27a7b42ea") : secret "marketplace-operator-metrics" not found Mar 08 21:57:31.437562 master-0 kubenswrapper[7480]: E0308 21:57:31.436872 7480 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 08 21:57:31.437562 master-0 kubenswrapper[7480]: E0308 21:57:31.436892 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert podName:83b5f0b6-adee-4820-8212-b4d182b178d2 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:32.436884884 +0000 UTC m=+2.890505496 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert") pod "catalog-operator-7d9c49f57b-6q5t2" (UID: "83b5f0b6-adee-4820-8212-b4d182b178d2") : secret "catalog-operator-serving-cert" not found Mar 08 21:57:31.437562 master-0 kubenswrapper[7480]: E0308 21:57:31.436929 7480 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 08 21:57:31.437562 master-0 kubenswrapper[7480]: E0308 21:57:31.436951 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls podName:4ef806a4-5486-43a9-8bfa-b1670c888dc1 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:32.436945086 +0000 UTC m=+2.890565688 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-mt484" (UID: "4ef806a4-5486-43a9-8bfa-b1670c888dc1") : secret "cluster-monitoring-operator-tls" not found Mar 08 21:57:31.437562 master-0 kubenswrapper[7480]: E0308 21:57:31.436991 7480 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 08 21:57:31.437562 master-0 kubenswrapper[7480]: E0308 21:57:31.437012 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert podName:2851c096-f5cb-4a46-a5a0-ac0b1341033b nodeName:}" failed. No retries permitted until 2026-03-08 21:57:32.437005667 +0000 UTC m=+2.890626289 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-c4lpf" (UID: "2851c096-f5cb-4a46-a5a0-ac0b1341033b") : secret "performance-addon-operator-webhook-cert" not found Mar 08 21:57:31.437562 master-0 kubenswrapper[7480]: E0308 21:57:31.437054 7480 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 08 21:57:31.437562 master-0 kubenswrapper[7480]: E0308 21:57:31.437097 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert podName:d287e2ca-f134-4e34-96f7-50a3055ee119 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:32.437088279 +0000 UTC m=+2.890708891 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert") pod "cluster-version-operator-745944c6b7-d8fd8" (UID: "d287e2ca-f134-4e34-96f7-50a3055ee119") : secret "cluster-version-operator-serving-cert" not found Mar 08 21:57:31.437562 master-0 kubenswrapper[7480]: E0308 21:57:31.437145 7480 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 08 21:57:31.437562 master-0 kubenswrapper[7480]: E0308 21:57:31.437167 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs podName:44e67e41-045e-42ef-8f60-6ef15606d6a2 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:32.437159461 +0000 UTC m=+2.890780063 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs") pod "network-metrics-daemon-lqdbv" (UID: "44e67e41-045e-42ef-8f60-6ef15606d6a2") : secret "metrics-daemon-secret" not found Mar 08 21:57:31.437562 master-0 kubenswrapper[7480]: E0308 21:57:31.437210 7480 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 08 21:57:31.437562 master-0 kubenswrapper[7480]: E0308 21:57:31.437232 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls podName:84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed nodeName:}" failed. No retries permitted until 2026-03-08 21:57:32.437225103 +0000 UTC m=+2.890845715 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls") pod "ingress-operator-677db989d6-cjdgr" (UID: "84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed") : secret "metrics-tls" not found Mar 08 21:57:31.438572 master-0 kubenswrapper[7480]: I0308 21:57:31.438376 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96gl4\" (UniqueName: \"kubernetes.io/projected/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-kube-api-access-96gl4\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 21:57:31.457414 master-0 kubenswrapper[7480]: I0308 21:57:31.456787 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-bound-sa-token\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 21:57:31.474486 master-0 kubenswrapper[7480]: I0308 21:57:31.474451 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l47w\" (UniqueName: \"kubernetes.io/projected/2851c096-f5cb-4a46-a5a0-ac0b1341033b-kube-api-access-2l47w\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:31.486296 master-0 kubenswrapper[7480]: E0308 21:57:31.486252 7480 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 08 21:57:31.505531 master-0 kubenswrapper[7480]: W0308 21:57:31.504842 7480 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Mar 08 21:57:31.505531 master-0 kubenswrapper[7480]: E0308 21:57:31.504956 7480 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0" Mar 08 21:57:31.536990 master-0 kubenswrapper[7480]: E0308 21:57:31.535820 7480 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 21:57:31.541145 master-0 kubenswrapper[7480]: E0308 21:57:31.541108 7480 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 21:57:31.593521 master-0 kubenswrapper[7480]: I0308 21:57:31.593360 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9c64\" (UniqueName: \"kubernetes.io/projected/dfe625a1-5ba4-491f-9ab3-5d91154961a0-kube-api-access-j9c64\") pod \"network-node-identity-trhtl\" (UID: \"dfe625a1-5ba4-491f-9ab3-5d91154961a0\") " pod="openshift-network-node-identity/network-node-identity-trhtl" Mar 08 21:57:31.613175 master-0 kubenswrapper[7480]: I0308 21:57:31.613133 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmdmr\" (UniqueName: \"kubernetes.io/projected/b358dcb7-d01f-4206-b636-b55a599a73bd-kube-api-access-bmdmr\") pod \"iptables-alerter-pwn9k\" (UID: \"b358dcb7-d01f-4206-b636-b55a599a73bd\") " pod="openshift-network-operator/iptables-alerter-pwn9k" Mar 08 21:57:31.635728 master-0 kubenswrapper[7480]: I0308 21:57:31.635677 7480 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 08 21:57:31.646092 master-0 kubenswrapper[7480]: I0308 21:57:31.644331 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5xq4\" (UniqueName: \"kubernetes.io/projected/f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e-kube-api-access-l5xq4\") pod \"network-check-target-djlff\" (UID: \"f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e\") " pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:57:31.649456 master-0 kubenswrapper[7480]: I0308 21:57:31.649321 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcqnj\" (UniqueName: \"kubernetes.io/projected/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-kube-api-access-pcqnj\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:31.656065 master-0 kubenswrapper[7480]: I0308 21:57:31.655841 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:57:31.884398 master-0 kubenswrapper[7480]: I0308 21:57:31.883635 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k" event={"ID":"a8e00c74-fb72-4e3d-a22c-c38a4772a813","Type":"ContainerStarted","Data":"334ebc87bbf952673cd1b3477f45396aaf813413e807f2bdfa8f48d87bc817d9"} Mar 08 21:57:31.889090 master-0 kubenswrapper[7480]: I0308 21:57:31.887734 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2" event={"ID":"b849f992-1020-4633-98be-75705b962fa9","Type":"ContainerStarted","Data":"c086cbd7303ffe955bb2645d06594a1046769c847ec0d61ce7c507a7b2e3ee42"} Mar 08 21:57:31.892253 master-0 kubenswrapper[7480]: I0308 21:57:31.889325 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" event={"ID":"4382d186-34e4-40af-9b92-bb17ddcaa23f","Type":"ContainerStarted","Data":"939aa1886a91ab1eb51e8a1cf13c57622098c7bede001e5d513bea76546b85fa"} Mar 08 21:57:31.914150 master-0 kubenswrapper[7480]: I0308 21:57:31.896465 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" event={"ID":"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657","Type":"ContainerStarted","Data":"a33aa7650397c6fcbc3db8208664515afb6c26ede2b1533a472f078a2d4a0ea4"} Mar 08 21:57:31.914150 master-0 kubenswrapper[7480]: I0308 21:57:31.898571 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg" event={"ID":"f6fbc12f-3c27-4a7a-933f-43a55c960335","Type":"ContainerStarted","Data":"fa11530abd773575590a911f848030e060ab34b160f17f0ed7e7dadcd26f2550"} Mar 08 21:57:31.914150 master-0 kubenswrapper[7480]: I0308 21:57:31.903380 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw" event={"ID":"971ffa86-4d52-4dc3-ba28-03d116ec3494","Type":"ContainerStarted","Data":"6df6f113522fa49700aeaebc115d4f7bc3c6c606f1453723e6b3427085f53838"} Mar 08 21:57:31.914150 master-0 kubenswrapper[7480]: I0308 21:57:31.906704 7480 generic.go:334] "Generic (PLEG): container finished" 
podID="d0641333-feda-44c5-baf5-ceee4ce3fd8f" containerID="0a07d531f2a5fce4c32633615b34d340e2c1873fb062556ca27529a7a07f33ff" exitCode=0 Mar 08 21:57:31.914150 master-0 kubenswrapper[7480]: I0308 21:57:31.906747 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" event={"ID":"d0641333-feda-44c5-baf5-ceee4ce3fd8f","Type":"ContainerDied","Data":"0a07d531f2a5fce4c32633615b34d340e2c1873fb062556ca27529a7a07f33ff"} Mar 08 21:57:31.914150 master-0 kubenswrapper[7480]: I0308 21:57:31.910828 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5" event={"ID":"0d851f97-b21e-432e-a4c3-dc0a8ff00e84","Type":"ContainerStarted","Data":"2372290458f059a617f7c34963da0c908f74ff47559433f117b121db9f6a2646"} Mar 08 21:57:31.914150 master-0 kubenswrapper[7480]: I0308 21:57:31.912610 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-nl9qg" event={"ID":"37bf82cb-adea-46d3-a899-136eb1d1f292","Type":"ContainerStarted","Data":"04944f14b53d02d121f70fd7c26fd29d16bc18bb4704e5d81fc7ee613027b6bb"} Mar 08 21:57:31.914851 master-0 kubenswrapper[7480]: I0308 21:57:31.914341 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" event={"ID":"d4d01185-e485-4697-92c2-31a044f25d82","Type":"ContainerStarted","Data":"2f8d7fcda4e6f52fa1e1bae05fb59e3135aaa4a13581f1a085c1284cb2c0e356"} Mar 08 21:57:31.970934 master-0 kubenswrapper[7480]: I0308 21:57:31.960907 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-djlff"] Mar 08 21:57:31.997228 master-0 kubenswrapper[7480]: W0308 21:57:31.997168 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1b63e59_0f09_4bc2_b1e7_a9a9ba97b53e.slice/crio-1794b122d487b56235f5a9e6effbe7f1e37c18fe47d01e1c40b8a77c4e74da16 WatchSource:0}: Error finding container 1794b122d487b56235f5a9e6effbe7f1e37c18fe47d01e1c40b8a77c4e74da16: Status 404 returned error can't find the container with id 1794b122d487b56235f5a9e6effbe7f1e37c18fe47d01e1c40b8a77c4e74da16 Mar 08 21:57:32.248664 master-0 kubenswrapper[7480]: I0308 21:57:32.248247 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 21:57:32.269851 master-0 kubenswrapper[7480]: I0308 21:57:32.269513 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 21:57:32.454104 master-0 kubenswrapper[7480]: I0308 21:57:32.453709 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert\") pod \"catalog-operator-7d9c49f57b-6q5t2\" (UID: \"83b5f0b6-adee-4820-8212-b4d182b178d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" Mar 08 21:57:32.454104 master-0 kubenswrapper[7480]: I0308 21:57:32.453756 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-mt484\" (UID: 
\"4ef806a4-5486-43a9-8bfa-b1670c888dc1\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" Mar 08 21:57:32.454104 master-0 kubenswrapper[7480]: I0308 21:57:32.453794 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:32.454104 master-0 kubenswrapper[7480]: I0308 21:57:32.453814 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:57:32.454104 master-0 kubenswrapper[7480]: I0308 21:57:32.453833 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs\") pod \"network-metrics-daemon-lqdbv\" (UID: \"44e67e41-045e-42ef-8f60-6ef15606d6a2\") " pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:57:32.454104 master-0 kubenswrapper[7480]: I0308 21:57:32.453869 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 21:57:32.454104 master-0 kubenswrapper[7480]: I0308 21:57:32.453888 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:32.454104 master-0 kubenswrapper[7480]: I0308 21:57:32.453905 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert\") pod \"olm-operator-d64cfc9db-xqh7x\" (UID: \"3c50dd1f-fcbc-412c-a1cc-0738ea4464e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" Mar 08 21:57:32.454104 master-0 kubenswrapper[7480]: I0308 21:57:32.453924 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls\") pod \"dns-operator-589895fbb7-wtvp5\" (UID: \"df48e7e0-0659-48e2-9b6a-32c964ff47b2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" Mar 08 21:57:32.454104 master-0 kubenswrapper[7480]: I0308 21:57:32.453964 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs\") pod \"multus-admission-controller-8d675b596-ddw98\" (UID: \"1dfc8afd-2330-46a4-ae5b-36522102b332\") " 
pod="openshift-multus/multus-admission-controller-8d675b596-ddw98" Mar 08 21:57:32.454104 master-0 kubenswrapper[7480]: I0308 21:57:32.453984 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-x5zxr\" (UID: \"be431b74-1116-4b0f-8b25-bbb0408411b0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 21:57:32.454104 master-0 kubenswrapper[7480]: I0308 21:57:32.454002 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:32.454104 master-0 kubenswrapper[7480]: I0308 21:57:32.454036 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-5ljhh\" (UID: \"7e0267ba-5dd7-4e81-885f-95b27a7b42ea\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 21:57:32.454679 master-0 kubenswrapper[7480]: E0308 21:57:32.454180 7480 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 08 21:57:32.454679 master-0 kubenswrapper[7480]: E0308 21:57:32.454240 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics podName:7e0267ba-5dd7-4e81-885f-95b27a7b42ea nodeName:}" failed. No retries permitted until 2026-03-08 21:57:34.454211865 +0000 UTC m=+4.907832467 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-5ljhh" (UID: "7e0267ba-5dd7-4e81-885f-95b27a7b42ea") : secret "marketplace-operator-metrics" not found Mar 08 21:57:32.454679 master-0 kubenswrapper[7480]: E0308 21:57:32.454548 7480 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 08 21:57:32.454679 master-0 kubenswrapper[7480]: E0308 21:57:32.454571 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert podName:83b5f0b6-adee-4820-8212-b4d182b178d2 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:34.454563604 +0000 UTC m=+4.908184206 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert") pod "catalog-operator-7d9c49f57b-6q5t2" (UID: "83b5f0b6-adee-4820-8212-b4d182b178d2") : secret "catalog-operator-serving-cert" not found Mar 08 21:57:32.454679 master-0 kubenswrapper[7480]: E0308 21:57:32.454620 7480 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 08 21:57:32.454679 master-0 kubenswrapper[7480]: E0308 21:57:32.454638 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls podName:4ef806a4-5486-43a9-8bfa-b1670c888dc1 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:34.454632446 +0000 UTC m=+4.908253048 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-mt484" (UID: "4ef806a4-5486-43a9-8bfa-b1670c888dc1") : secret "cluster-monitoring-operator-tls" not found Mar 08 21:57:32.454679 master-0 kubenswrapper[7480]: E0308 21:57:32.454684 7480 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 08 21:57:32.454864 master-0 kubenswrapper[7480]: E0308 21:57:32.454704 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert podName:2851c096-f5cb-4a46-a5a0-ac0b1341033b nodeName:}" failed. No retries permitted until 2026-03-08 21:57:34.454697528 +0000 UTC m=+4.908318130 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-c4lpf" (UID: "2851c096-f5cb-4a46-a5a0-ac0b1341033b") : secret "performance-addon-operator-webhook-cert" not found Mar 08 21:57:32.454864 master-0 kubenswrapper[7480]: E0308 21:57:32.454762 7480 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 08 21:57:32.454864 master-0 kubenswrapper[7480]: E0308 21:57:32.454785 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert podName:d287e2ca-f134-4e34-96f7-50a3055ee119 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:34.45477834 +0000 UTC m=+4.908398942 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert") pod "cluster-version-operator-745944c6b7-d8fd8" (UID: "d287e2ca-f134-4e34-96f7-50a3055ee119") : secret "cluster-version-operator-serving-cert" not found Mar 08 21:57:32.454864 master-0 kubenswrapper[7480]: E0308 21:57:32.454847 7480 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 08 21:57:32.454968 master-0 kubenswrapper[7480]: E0308 21:57:32.454872 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs podName:44e67e41-045e-42ef-8f60-6ef15606d6a2 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:34.454863312 +0000 UTC m=+4.908483914 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs") pod "network-metrics-daemon-lqdbv" (UID: "44e67e41-045e-42ef-8f60-6ef15606d6a2") : secret "metrics-daemon-secret" not found Mar 08 21:57:32.454968 master-0 kubenswrapper[7480]: E0308 21:57:32.454926 7480 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 08 21:57:32.454968 master-0 kubenswrapper[7480]: E0308 21:57:32.454945 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls podName:84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed nodeName:}" failed. No retries permitted until 2026-03-08 21:57:34.454938974 +0000 UTC m=+4.908559566 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls") pod "ingress-operator-677db989d6-cjdgr" (UID: "84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed") : secret "metrics-tls" not found Mar 08 21:57:32.455065 master-0 kubenswrapper[7480]: E0308 21:57:32.454975 7480 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 08 21:57:32.455065 master-0 kubenswrapper[7480]: E0308 21:57:32.455009 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls podName:a913c639-ebfc-42a3-85cd-8a460027d3ec nodeName:}" failed. No retries permitted until 2026-03-08 21:57:34.455003165 +0000 UTC m=+4.908623767 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-g2ddr" (UID: "a913c639-ebfc-42a3-85cd-8a460027d3ec") : secret "image-registry-operator-tls" not found Mar 08 21:57:32.455065 master-0 kubenswrapper[7480]: E0308 21:57:32.455041 7480 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 08 21:57:32.455174 master-0 kubenswrapper[7480]: E0308 21:57:32.455093 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert podName:3c50dd1f-fcbc-412c-a1cc-0738ea4464e0 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:34.455054237 +0000 UTC m=+4.908674839 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert") pod "olm-operator-d64cfc9db-xqh7x" (UID: "3c50dd1f-fcbc-412c-a1cc-0738ea4464e0") : secret "olm-operator-serving-cert" not found Mar 08 21:57:32.455174 master-0 kubenswrapper[7480]: E0308 21:57:32.455133 7480 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 08 21:57:32.455174 master-0 kubenswrapper[7480]: E0308 21:57:32.455170 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls podName:df48e7e0-0659-48e2-9b6a-32c964ff47b2 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:34.455145669 +0000 UTC m=+4.908766271 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls") pod "dns-operator-589895fbb7-wtvp5" (UID: "df48e7e0-0659-48e2-9b6a-32c964ff47b2") : secret "metrics-tls" not found Mar 08 21:57:32.455258 master-0 kubenswrapper[7480]: E0308 21:57:32.455209 7480 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 08 21:57:32.455258 master-0 kubenswrapper[7480]: E0308 21:57:32.455252 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs podName:1dfc8afd-2330-46a4-ae5b-36522102b332 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:34.455244911 +0000 UTC m=+4.908865513 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs") pod "multus-admission-controller-8d675b596-ddw98" (UID: "1dfc8afd-2330-46a4-ae5b-36522102b332") : secret "multus-admission-controller-secret" not found Mar 08 21:57:32.455339 master-0 kubenswrapper[7480]: E0308 21:57:32.455297 7480 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 08 21:57:32.455339 master-0 kubenswrapper[7480]: E0308 21:57:32.455337 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert podName:be431b74-1116-4b0f-8b25-bbb0408411b0 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:34.455330854 +0000 UTC m=+4.908951446 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-x5zxr" (UID: "be431b74-1116-4b0f-8b25-bbb0408411b0") : secret "package-server-manager-serving-cert" not found Mar 08 21:57:32.455415 master-0 kubenswrapper[7480]: E0308 21:57:32.455369 7480 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 08 21:57:32.455415 master-0 kubenswrapper[7480]: E0308 21:57:32.455404 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls podName:2851c096-f5cb-4a46-a5a0-ac0b1341033b nodeName:}" failed. 
No retries permitted until 2026-03-08 21:57:34.455381975 +0000 UTC m=+4.909002577 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-c4lpf" (UID: "2851c096-f5cb-4a46-a5a0-ac0b1341033b") : secret "node-tuning-operator-tls" not found Mar 08 21:57:32.621094 master-0 kubenswrapper[7480]: I0308 21:57:32.620340 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 21:57:32.638333 master-0 kubenswrapper[7480]: I0308 21:57:32.626802 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 21:57:32.738757 master-0 kubenswrapper[7480]: I0308 21:57:32.736527 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-57ccdf9b5-bf6ws"] Mar 08 21:57:32.738757 master-0 kubenswrapper[7480]: E0308 21:57:32.736747 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e15fa7c1-65ea-4956-a262-841d8a79c49f" containerName="prober" Mar 08 21:57:32.738757 master-0 kubenswrapper[7480]: I0308 21:57:32.736761 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="e15fa7c1-65ea-4956-a262-841d8a79c49f" containerName="prober" Mar 08 21:57:32.738757 master-0 kubenswrapper[7480]: E0308 21:57:32.736771 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a43561f-bdde-456b-b4a4-2055d4fe6880" containerName="assisted-installer-controller" Mar 08 21:57:32.738757 master-0 kubenswrapper[7480]: I0308 21:57:32.736779 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a43561f-bdde-456b-b4a4-2055d4fe6880" containerName="assisted-installer-controller" Mar 08 21:57:32.738757 master-0 kubenswrapper[7480]: I0308 21:57:32.736886 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="e15fa7c1-65ea-4956-a262-841d8a79c49f" containerName="prober" Mar 08 21:57:32.738757 master-0 kubenswrapper[7480]: I0308 21:57:32.736901 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a43561f-bdde-456b-b4a4-2055d4fe6880" containerName="assisted-installer-controller" Mar 08 21:57:32.738757 master-0 kubenswrapper[7480]: I0308 21:57:32.737432 7480 util.go:30] "No sandbox for pod can be found. 
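Editor's note: every failure above is the same pattern. A pod references a secret that its owning operator (or the service-ca controller) has not created yet, the kubelet's volume manager logs the secret.go:189 lookup failure and the nestedpendingoperations.go:348 retry deadline, and the pod stays in ContainerCreating. During early bootstrap this is typically transient. A minimal, hypothetical client-go sketch (not part of this log; the kubeconfig location is an assumption, the namespace and secret name are taken from the lines above) for checking from outside the node whether one of these secrets has appeared yet:

```go
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig at the default ~/.kube/config location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The same lookup that secret.go:189 keeps failing above.
	_, err = client.CoreV1().Secrets("openshift-monitoring").
		Get(context.TODO(), "cluster-monitoring-operator-tls", metav1.GetOptions{})
	switch {
	case apierrors.IsNotFound(err):
		fmt.Println(`secret "cluster-monitoring-operator-tls" not found (matches the kubelet error)`)
	case err != nil:
		panic(err)
	default:
		fmt.Println("secret exists; the kubelet's next scheduled retry should mount it")
	}
}
```

Once the owning component publishes the secret, the next scheduled retry mounts the volume and the pod proceeds; no kubelet restart is needed.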
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-bf6ws" Mar 08 21:57:32.746488 master-0 kubenswrapper[7480]: I0308 21:57:32.746435 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-57ccdf9b5-bf6ws"] Mar 08 21:57:32.759609 master-0 kubenswrapper[7480]: I0308 21:57:32.759569 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 08 21:57:32.777741 master-0 kubenswrapper[7480]: I0308 21:57:32.777617 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 08 21:57:32.860671 master-0 kubenswrapper[7480]: I0308 21:57:32.860621 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfpt7\" (UniqueName: \"kubernetes.io/projected/0d0feb73-2ef6-4083-81ce-82a1394ce9c4-kube-api-access-jfpt7\") pod \"migrator-57ccdf9b5-bf6ws\" (UID: \"0d0feb73-2ef6-4083-81ce-82a1394ce9c4\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-bf6ws" Mar 08 21:57:32.928911 master-0 kubenswrapper[7480]: I0308 21:57:32.928388 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-djlff" event={"ID":"f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e","Type":"ContainerStarted","Data":"2035cde02874bda71dfa2e89042a27ebe4c62587d22d2cbeee64782d9acfe89b"} Mar 08 21:57:32.928911 master-0 kubenswrapper[7480]: I0308 21:57:32.928480 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-djlff" event={"ID":"f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e","Type":"ContainerStarted","Data":"1794b122d487b56235f5a9e6effbe7f1e37c18fe47d01e1c40b8a77c4e74da16"} Mar 08 21:57:32.928911 master-0 kubenswrapper[7480]: I0308 21:57:32.928638 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:57:32.962250 master-0 kubenswrapper[7480]: I0308 21:57:32.962083 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfpt7\" (UniqueName: \"kubernetes.io/projected/0d0feb73-2ef6-4083-81ce-82a1394ce9c4-kube-api-access-jfpt7\") pod \"migrator-57ccdf9b5-bf6ws\" (UID: \"0d0feb73-2ef6-4083-81ce-82a1394ce9c4\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-bf6ws" Mar 08 21:57:32.967337 master-0 kubenswrapper[7480]: I0308 21:57:32.967059 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 21:57:32.985130 master-0 kubenswrapper[7480]: I0308 21:57:32.984703 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 21:57:33.020264 master-0 kubenswrapper[7480]: I0308 21:57:33.020188 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfpt7\" (UniqueName: \"kubernetes.io/projected/0d0feb73-2ef6-4083-81ce-82a1394ce9c4-kube-api-access-jfpt7\") pod \"migrator-57ccdf9b5-bf6ws\" (UID: \"0d0feb73-2ef6-4083-81ce-82a1394ce9c4\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-bf6ws" Mar 08 21:57:33.050169 master-0 kubenswrapper[7480]: I0308 21:57:33.048769 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 21:57:33.069122 master-0 kubenswrapper[7480]: I0308 21:57:33.066115 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-bf6ws" Mar 08 21:57:33.408518 master-0 kubenswrapper[7480]: I0308 21:57:33.408121 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr"] Mar 08 21:57:33.408770 master-0 kubenswrapper[7480]: I0308 21:57:33.408644 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" Mar 08 21:57:33.434222 master-0 kubenswrapper[7480]: I0308 21:57:33.430273 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr"] Mar 08 21:57:33.526480 master-0 kubenswrapper[7480]: I0308 21:57:33.526348 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-7f7b4"] Mar 08 21:57:33.526963 master-0 kubenswrapper[7480]: I0308 21:57:33.526935 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-7f7b4" Mar 08 21:57:33.529190 master-0 kubenswrapper[7480]: I0308 21:57:33.529005 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 08 21:57:33.529351 master-0 kubenswrapper[7480]: I0308 21:57:33.529328 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 08 21:57:33.529866 master-0 kubenswrapper[7480]: I0308 21:57:33.529811 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 08 21:57:33.531265 master-0 kubenswrapper[7480]: I0308 21:57:33.531242 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 08 21:57:33.531755 master-0 kubenswrapper[7480]: I0308 21:57:33.531529 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 08 21:57:33.537425 master-0 kubenswrapper[7480]: I0308 21:57:33.537288 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 08 21:57:33.539703 master-0 kubenswrapper[7480]: I0308 21:57:33.539667 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-7f7b4"] Mar 08 21:57:33.578562 master-0 kubenswrapper[7480]: I0308 21:57:33.578493 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmfqq\" (UniqueName: \"kubernetes.io/projected/c901b468-b8e9-48f8-8050-0d54e24e2adb-kube-api-access-hmfqq\") pod \"csi-snapshot-controller-7577d6f48-wklhr\" (UID: \"c901b468-b8e9-48f8-8050-0d54e24e2adb\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" Mar 08 21:57:33.628574 master-0 kubenswrapper[7480]: I0308 21:57:33.628477 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:33.640616 master-0 kubenswrapper[7480]: I0308 21:57:33.640572 7480 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-58959cd4d6-k8jj7"] Mar 08 21:57:33.642792 master-0 kubenswrapper[7480]: I0308 21:57:33.641080 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-k8jj7" Mar 08 21:57:33.646406 master-0 kubenswrapper[7480]: I0308 21:57:33.646050 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 08 21:57:33.646406 master-0 kubenswrapper[7480]: I0308 21:57:33.646117 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 08 21:57:33.646406 master-0 kubenswrapper[7480]: I0308 21:57:33.646057 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 08 21:57:33.646406 master-0 kubenswrapper[7480]: I0308 21:57:33.646319 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 08 21:57:33.662303 master-0 kubenswrapper[7480]: I0308 21:57:33.661381 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 08 21:57:33.665868 master-0 kubenswrapper[7480]: I0308 21:57:33.665827 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58959cd4d6-k8jj7"] Mar 08 21:57:33.679641 master-0 kubenswrapper[7480]: I0308 21:57:33.679593 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-7f7b4\" (UID: \"84b193de-34da-49b6-bf13-7b97399e7d07\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-7f7b4" Mar 08 21:57:33.679977 master-0 kubenswrapper[7480]: I0308 21:57:33.679932 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84b193de-34da-49b6-bf13-7b97399e7d07-serving-cert\") pod \"controller-manager-6f7fd6c796-7f7b4\" (UID: \"84b193de-34da-49b6-bf13-7b97399e7d07\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-7f7b4" Mar 08 21:57:33.680215 master-0 kubenswrapper[7480]: I0308 21:57:33.680145 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-config\") pod \"controller-manager-6f7fd6c796-7f7b4\" (UID: \"84b193de-34da-49b6-bf13-7b97399e7d07\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-7f7b4" Mar 08 21:57:33.680390 master-0 kubenswrapper[7480]: I0308 21:57:33.680370 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-client-ca\") pod \"controller-manager-6f7fd6c796-7f7b4\" (UID: \"84b193de-34da-49b6-bf13-7b97399e7d07\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-7f7b4" Mar 08 21:57:33.680663 master-0 kubenswrapper[7480]: I0308 21:57:33.680631 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-782mx\" (UniqueName: 
\"kubernetes.io/projected/84b193de-34da-49b6-bf13-7b97399e7d07-kube-api-access-782mx\") pod \"controller-manager-6f7fd6c796-7f7b4\" (UID: \"84b193de-34da-49b6-bf13-7b97399e7d07\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-7f7b4" Mar 08 21:57:33.680843 master-0 kubenswrapper[7480]: I0308 21:57:33.680830 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmfqq\" (UniqueName: \"kubernetes.io/projected/c901b468-b8e9-48f8-8050-0d54e24e2adb-kube-api-access-hmfqq\") pod \"csi-snapshot-controller-7577d6f48-wklhr\" (UID: \"c901b468-b8e9-48f8-8050-0d54e24e2adb\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" Mar 08 21:57:33.696586 master-0 kubenswrapper[7480]: I0308 21:57:33.696472 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:33.723261 master-0 kubenswrapper[7480]: I0308 21:57:33.723217 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmfqq\" (UniqueName: \"kubernetes.io/projected/c901b468-b8e9-48f8-8050-0d54e24e2adb-kube-api-access-hmfqq\") pod \"csi-snapshot-controller-7577d6f48-wklhr\" (UID: \"c901b468-b8e9-48f8-8050-0d54e24e2adb\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" Mar 08 21:57:33.743702 master-0 kubenswrapper[7480]: I0308 21:57:33.743653 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" Mar 08 21:57:33.781682 master-0 kubenswrapper[7480]: I0308 21:57:33.781608 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2frj\" (UniqueName: \"kubernetes.io/projected/ba79b6eb-0db6-43a4-abd9-8fc35066c103-kube-api-access-n2frj\") pod \"route-controller-manager-58959cd4d6-k8jj7\" (UID: \"ba79b6eb-0db6-43a4-abd9-8fc35066c103\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-k8jj7" Mar 08 21:57:33.781907 master-0 kubenswrapper[7480]: I0308 21:57:33.781698 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba79b6eb-0db6-43a4-abd9-8fc35066c103-client-ca\") pod \"route-controller-manager-58959cd4d6-k8jj7\" (UID: \"ba79b6eb-0db6-43a4-abd9-8fc35066c103\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-k8jj7" Mar 08 21:57:33.781907 master-0 kubenswrapper[7480]: I0308 21:57:33.781735 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-7f7b4\" (UID: \"84b193de-34da-49b6-bf13-7b97399e7d07\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-7f7b4" Mar 08 21:57:33.781907 master-0 kubenswrapper[7480]: I0308 21:57:33.781759 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84b193de-34da-49b6-bf13-7b97399e7d07-serving-cert\") pod \"controller-manager-6f7fd6c796-7f7b4\" (UID: \"84b193de-34da-49b6-bf13-7b97399e7d07\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-7f7b4" Mar 08 21:57:33.781907 master-0 kubenswrapper[7480]: I0308 21:57:33.781784 7480 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-config\") pod \"controller-manager-6f7fd6c796-7f7b4\" (UID: \"84b193de-34da-49b6-bf13-7b97399e7d07\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-7f7b4" Mar 08 21:57:33.781907 master-0 kubenswrapper[7480]: I0308 21:57:33.781799 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-client-ca\") pod \"controller-manager-6f7fd6c796-7f7b4\" (UID: \"84b193de-34da-49b6-bf13-7b97399e7d07\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-7f7b4" Mar 08 21:57:33.781907 master-0 kubenswrapper[7480]: I0308 21:57:33.781865 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba79b6eb-0db6-43a4-abd9-8fc35066c103-config\") pod \"route-controller-manager-58959cd4d6-k8jj7\" (UID: \"ba79b6eb-0db6-43a4-abd9-8fc35066c103\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-k8jj7" Mar 08 21:57:33.781907 master-0 kubenswrapper[7480]: I0308 21:57:33.781893 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-782mx\" (UniqueName: \"kubernetes.io/projected/84b193de-34da-49b6-bf13-7b97399e7d07-kube-api-access-782mx\") pod \"controller-manager-6f7fd6c796-7f7b4\" (UID: \"84b193de-34da-49b6-bf13-7b97399e7d07\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-7f7b4" Mar 08 21:57:33.781907 master-0 kubenswrapper[7480]: I0308 21:57:33.781910 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba79b6eb-0db6-43a4-abd9-8fc35066c103-serving-cert\") pod \"route-controller-manager-58959cd4d6-k8jj7\" (UID: \"ba79b6eb-0db6-43a4-abd9-8fc35066c103\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-k8jj7" Mar 08 21:57:33.782271 master-0 kubenswrapper[7480]: E0308 21:57:33.782223 7480 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Mar 08 21:57:33.782310 master-0 kubenswrapper[7480]: E0308 21:57:33.782302 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-config podName:84b193de-34da-49b6-bf13-7b97399e7d07 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:34.282280722 +0000 UTC m=+4.735901324 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-config") pod "controller-manager-6f7fd6c796-7f7b4" (UID: "84b193de-34da-49b6-bf13-7b97399e7d07") : configmap "config" not found Mar 08 21:57:33.782872 master-0 kubenswrapper[7480]: E0308 21:57:33.782455 7480 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 08 21:57:33.782872 master-0 kubenswrapper[7480]: E0308 21:57:33.782547 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-client-ca podName:84b193de-34da-49b6-bf13-7b97399e7d07 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:34.282526828 +0000 UTC m=+4.736147430 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-client-ca") pod "controller-manager-6f7fd6c796-7f7b4" (UID: "84b193de-34da-49b6-bf13-7b97399e7d07") : configmap "client-ca" not found Mar 08 21:57:33.782872 master-0 kubenswrapper[7480]: E0308 21:57:33.782622 7480 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Mar 08 21:57:33.782872 master-0 kubenswrapper[7480]: E0308 21:57:33.782650 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-proxy-ca-bundles podName:84b193de-34da-49b6-bf13-7b97399e7d07 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:34.282642661 +0000 UTC m=+4.736263263 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-proxy-ca-bundles") pod "controller-manager-6f7fd6c796-7f7b4" (UID: "84b193de-34da-49b6-bf13-7b97399e7d07") : configmap "openshift-global-ca" not found Mar 08 21:57:33.782872 master-0 kubenswrapper[7480]: E0308 21:57:33.782702 7480 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 08 21:57:33.782872 master-0 kubenswrapper[7480]: E0308 21:57:33.782723 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84b193de-34da-49b6-bf13-7b97399e7d07-serving-cert podName:84b193de-34da-49b6-bf13-7b97399e7d07 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:34.282716513 +0000 UTC m=+4.736337115 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/84b193de-34da-49b6-bf13-7b97399e7d07-serving-cert") pod "controller-manager-6f7fd6c796-7f7b4" (UID: "84b193de-34da-49b6-bf13-7b97399e7d07") : secret "serving-cert" not found Mar 08 21:57:33.821976 master-0 kubenswrapper[7480]: I0308 21:57:33.821914 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-782mx\" (UniqueName: \"kubernetes.io/projected/84b193de-34da-49b6-bf13-7b97399e7d07-kube-api-access-782mx\") pod \"controller-manager-6f7fd6c796-7f7b4\" (UID: \"84b193de-34da-49b6-bf13-7b97399e7d07\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-7f7b4" Mar 08 21:57:33.883483 master-0 kubenswrapper[7480]: I0308 21:57:33.883406 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2frj\" (UniqueName: \"kubernetes.io/projected/ba79b6eb-0db6-43a4-abd9-8fc35066c103-kube-api-access-n2frj\") pod \"route-controller-manager-58959cd4d6-k8jj7\" (UID: \"ba79b6eb-0db6-43a4-abd9-8fc35066c103\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-k8jj7" Mar 08 21:57:33.883483 master-0 kubenswrapper[7480]: I0308 21:57:33.883473 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba79b6eb-0db6-43a4-abd9-8fc35066c103-client-ca\") pod \"route-controller-manager-58959cd4d6-k8jj7\" (UID: \"ba79b6eb-0db6-43a4-abd9-8fc35066c103\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-k8jj7" Mar 08 21:57:33.883782 master-0 kubenswrapper[7480]: E0308 21:57:33.883538 7480 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 08 
21:57:33.883782 master-0 kubenswrapper[7480]: E0308 21:57:33.883589 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba79b6eb-0db6-43a4-abd9-8fc35066c103-client-ca podName:ba79b6eb-0db6-43a4-abd9-8fc35066c103 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:34.383574784 +0000 UTC m=+4.837195386 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/ba79b6eb-0db6-43a4-abd9-8fc35066c103-client-ca") pod "route-controller-manager-58959cd4d6-k8jj7" (UID: "ba79b6eb-0db6-43a4-abd9-8fc35066c103") : configmap "client-ca" not found Mar 08 21:57:33.883782 master-0 kubenswrapper[7480]: I0308 21:57:33.883625 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba79b6eb-0db6-43a4-abd9-8fc35066c103-config\") pod \"route-controller-manager-58959cd4d6-k8jj7\" (UID: \"ba79b6eb-0db6-43a4-abd9-8fc35066c103\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-k8jj7" Mar 08 21:57:33.883782 master-0 kubenswrapper[7480]: I0308 21:57:33.883651 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba79b6eb-0db6-43a4-abd9-8fc35066c103-serving-cert\") pod \"route-controller-manager-58959cd4d6-k8jj7\" (UID: \"ba79b6eb-0db6-43a4-abd9-8fc35066c103\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-k8jj7" Mar 08 21:57:33.883782 master-0 kubenswrapper[7480]: E0308 21:57:33.883766 7480 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 08 21:57:33.883782 master-0 kubenswrapper[7480]: E0308 21:57:33.883787 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba79b6eb-0db6-43a4-abd9-8fc35066c103-serving-cert podName:ba79b6eb-0db6-43a4-abd9-8fc35066c103 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:34.38378005 +0000 UTC m=+4.837400652 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ba79b6eb-0db6-43a4-abd9-8fc35066c103-serving-cert") pod "route-controller-manager-58959cd4d6-k8jj7" (UID: "ba79b6eb-0db6-43a4-abd9-8fc35066c103") : secret "serving-cert" not found Mar 08 21:57:33.884024 master-0 kubenswrapper[7480]: E0308 21:57:33.883808 7480 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: configmap "config" not found Mar 08 21:57:33.884024 master-0 kubenswrapper[7480]: E0308 21:57:33.883826 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba79b6eb-0db6-43a4-abd9-8fc35066c103-config podName:ba79b6eb-0db6-43a4-abd9-8fc35066c103 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:34.383821031 +0000 UTC m=+4.837441633 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ba79b6eb-0db6-43a4-abd9-8fc35066c103-config") pod "route-controller-manager-58959cd4d6-k8jj7" (UID: "ba79b6eb-0db6-43a4-abd9-8fc35066c103") : configmap "config" not found Mar 08 21:57:33.910754 master-0 kubenswrapper[7480]: I0308 21:57:33.910697 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2frj\" (UniqueName: \"kubernetes.io/projected/ba79b6eb-0db6-43a4-abd9-8fc35066c103-kube-api-access-n2frj\") pod \"route-controller-manager-58959cd4d6-k8jj7\" (UID: \"ba79b6eb-0db6-43a4-abd9-8fc35066c103\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-k8jj7" Mar 08 21:57:33.940637 master-0 kubenswrapper[7480]: I0308 21:57:33.940474 7480 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 21:57:33.940637 master-0 kubenswrapper[7480]: I0308 21:57:33.940506 7480 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 21:57:34.227314 master-0 kubenswrapper[7480]: I0308 21:57:34.226598 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-57ccdf9b5-bf6ws"] Mar 08 21:57:34.238153 master-0 kubenswrapper[7480]: W0308 21:57:34.238057 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d0feb73_2ef6_4083_81ce_82a1394ce9c4.slice/crio-03e24173b288bd97ec848e0cf7a888e3b1e752701cc2a0adfe31f0bbf45fd669 WatchSource:0}: Error finding container 03e24173b288bd97ec848e0cf7a888e3b1e752701cc2a0adfe31f0bbf45fd669: Status 404 returned error can't find the container with id 03e24173b288bd97ec848e0cf7a888e3b1e752701cc2a0adfe31f0bbf45fd669 Mar 08 21:57:34.291481 master-0 kubenswrapper[7480]: I0308 21:57:34.290472 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-7f7b4\" (UID: \"84b193de-34da-49b6-bf13-7b97399e7d07\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-7f7b4" Mar 08 21:57:34.291481 master-0 kubenswrapper[7480]: I0308 21:57:34.290937 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84b193de-34da-49b6-bf13-7b97399e7d07-serving-cert\") pod \"controller-manager-6f7fd6c796-7f7b4\" (UID: \"84b193de-34da-49b6-bf13-7b97399e7d07\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-7f7b4" Mar 08 21:57:34.291481 master-0 kubenswrapper[7480]: I0308 21:57:34.290966 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-client-ca\") pod \"controller-manager-6f7fd6c796-7f7b4\" (UID: \"84b193de-34da-49b6-bf13-7b97399e7d07\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-7f7b4" Mar 08 21:57:34.291481 master-0 kubenswrapper[7480]: I0308 21:57:34.291006 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-config\") pod \"controller-manager-6f7fd6c796-7f7b4\" (UID: \"84b193de-34da-49b6-bf13-7b97399e7d07\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-7f7b4" Mar 08 21:57:34.291481 master-0 kubenswrapper[7480]: E0308 
21:57:34.290747 7480 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Mar 08 21:57:34.291481 master-0 kubenswrapper[7480]: E0308 21:57:34.291183 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-proxy-ca-bundles podName:84b193de-34da-49b6-bf13-7b97399e7d07 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:35.291158718 +0000 UTC m=+5.744779320 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-proxy-ca-bundles") pod "controller-manager-6f7fd6c796-7f7b4" (UID: "84b193de-34da-49b6-bf13-7b97399e7d07") : configmap "openshift-global-ca" not found Mar 08 21:57:34.291481 master-0 kubenswrapper[7480]: E0308 21:57:34.291237 7480 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 08 21:57:34.291481 master-0 kubenswrapper[7480]: E0308 21:57:34.291330 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84b193de-34da-49b6-bf13-7b97399e7d07-serving-cert podName:84b193de-34da-49b6-bf13-7b97399e7d07 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:35.291305812 +0000 UTC m=+5.744926414 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/84b193de-34da-49b6-bf13-7b97399e7d07-serving-cert") pod "controller-manager-6f7fd6c796-7f7b4" (UID: "84b193de-34da-49b6-bf13-7b97399e7d07") : secret "serving-cert" not found Mar 08 21:57:34.291481 master-0 kubenswrapper[7480]: E0308 21:57:34.291347 7480 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Mar 08 21:57:34.291481 master-0 kubenswrapper[7480]: E0308 21:57:34.291436 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-config podName:84b193de-34da-49b6-bf13-7b97399e7d07 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:35.291412095 +0000 UTC m=+5.745032697 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-config") pod "controller-manager-6f7fd6c796-7f7b4" (UID: "84b193de-34da-49b6-bf13-7b97399e7d07") : configmap "config" not found Mar 08 21:57:34.291481 master-0 kubenswrapper[7480]: E0308 21:57:34.291466 7480 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 08 21:57:34.291481 master-0 kubenswrapper[7480]: E0308 21:57:34.291490 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-client-ca podName:84b193de-34da-49b6-bf13-7b97399e7d07 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:35.291483346 +0000 UTC m=+5.745103948 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-client-ca") pod "controller-manager-6f7fd6c796-7f7b4" (UID: "84b193de-34da-49b6-bf13-7b97399e7d07") : configmap "client-ca" not found Mar 08 21:57:34.296452 master-0 kubenswrapper[7480]: I0308 21:57:34.296412 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr"] Mar 08 21:57:34.323313 master-0 kubenswrapper[7480]: W0308 21:57:34.323271 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc901b468_b8e9_48f8_8050_0d54e24e2adb.slice/crio-5e6100d027b85834b0f36e6902f07cf9a882faac96d2f9348fa6d8cef4d4f07c WatchSource:0}: Error finding container 5e6100d027b85834b0f36e6902f07cf9a882faac96d2f9348fa6d8cef4d4f07c: Status 404 returned error can't find the container with id 5e6100d027b85834b0f36e6902f07cf9a882faac96d2f9348fa6d8cef4d4f07c Mar 08 21:57:34.392600 master-0 kubenswrapper[7480]: I0308 21:57:34.392476 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba79b6eb-0db6-43a4-abd9-8fc35066c103-client-ca\") pod \"route-controller-manager-58959cd4d6-k8jj7\" (UID: \"ba79b6eb-0db6-43a4-abd9-8fc35066c103\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-k8jj7" Mar 08 21:57:34.393674 master-0 kubenswrapper[7480]: E0308 21:57:34.392683 7480 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 08 21:57:34.393674 master-0 kubenswrapper[7480]: E0308 21:57:34.392836 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba79b6eb-0db6-43a4-abd9-8fc35066c103-client-ca podName:ba79b6eb-0db6-43a4-abd9-8fc35066c103 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:35.392799869 +0000 UTC m=+5.846420511 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/ba79b6eb-0db6-43a4-abd9-8fc35066c103-client-ca") pod "route-controller-manager-58959cd4d6-k8jj7" (UID: "ba79b6eb-0db6-43a4-abd9-8fc35066c103") : configmap "client-ca" not found Mar 08 21:57:34.393674 master-0 kubenswrapper[7480]: I0308 21:57:34.393051 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba79b6eb-0db6-43a4-abd9-8fc35066c103-config\") pod \"route-controller-manager-58959cd4d6-k8jj7\" (UID: \"ba79b6eb-0db6-43a4-abd9-8fc35066c103\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-k8jj7" Mar 08 21:57:34.393674 master-0 kubenswrapper[7480]: I0308 21:57:34.393176 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba79b6eb-0db6-43a4-abd9-8fc35066c103-serving-cert\") pod \"route-controller-manager-58959cd4d6-k8jj7\" (UID: \"ba79b6eb-0db6-43a4-abd9-8fc35066c103\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-k8jj7" Mar 08 21:57:34.393674 master-0 kubenswrapper[7480]: E0308 21:57:34.393325 7480 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: configmap "config" not found Mar 08 21:57:34.393674 master-0 kubenswrapper[7480]: E0308 21:57:34.393429 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba79b6eb-0db6-43a4-abd9-8fc35066c103-config podName:ba79b6eb-0db6-43a4-abd9-8fc35066c103 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:35.393403706 +0000 UTC m=+5.847024318 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ba79b6eb-0db6-43a4-abd9-8fc35066c103-config") pod "route-controller-manager-58959cd4d6-k8jj7" (UID: "ba79b6eb-0db6-43a4-abd9-8fc35066c103") : configmap "config" not found Mar 08 21:57:34.393674 master-0 kubenswrapper[7480]: E0308 21:57:34.393505 7480 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 08 21:57:34.393674 master-0 kubenswrapper[7480]: E0308 21:57:34.393565 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba79b6eb-0db6-43a4-abd9-8fc35066c103-serving-cert podName:ba79b6eb-0db6-43a4-abd9-8fc35066c103 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:35.393549359 +0000 UTC m=+5.847169971 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ba79b6eb-0db6-43a4-abd9-8fc35066c103-serving-cert") pod "route-controller-manager-58959cd4d6-k8jj7" (UID: "ba79b6eb-0db6-43a4-abd9-8fc35066c103") : secret "serving-cert" not found Mar 08 21:57:34.494057 master-0 kubenswrapper[7480]: I0308 21:57:34.493831 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:34.494057 master-0 kubenswrapper[7480]: I0308 21:57:34.493907 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert\") pod \"olm-operator-d64cfc9db-xqh7x\" (UID: \"3c50dd1f-fcbc-412c-a1cc-0738ea4464e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" Mar 08 21:57:34.494057 master-0 kubenswrapper[7480]: I0308 21:57:34.493933 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls\") pod \"dns-operator-589895fbb7-wtvp5\" (UID: \"df48e7e0-0659-48e2-9b6a-32c964ff47b2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" Mar 08 21:57:34.494057 master-0 kubenswrapper[7480]: E0308 21:57:34.494054 7480 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 08 21:57:34.494057 master-0 kubenswrapper[7480]: E0308 21:57:34.494097 7480 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 08 21:57:34.494631 master-0 kubenswrapper[7480]: E0308 21:57:34.494134 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert podName:3c50dd1f-fcbc-412c-a1cc-0738ea4464e0 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:38.494116262 +0000 UTC m=+8.947736874 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert") pod "olm-operator-d64cfc9db-xqh7x" (UID: "3c50dd1f-fcbc-412c-a1cc-0738ea4464e0") : secret "olm-operator-serving-cert" not found Mar 08 21:57:34.494631 master-0 kubenswrapper[7480]: E0308 21:57:34.494252 7480 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 08 21:57:34.494631 master-0 kubenswrapper[7480]: E0308 21:57:34.494349 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls podName:a913c639-ebfc-42a3-85cd-8a460027d3ec nodeName:}" failed. No retries permitted until 2026-03-08 21:57:38.494295328 +0000 UTC m=+8.947915960 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-g2ddr" (UID: "a913c639-ebfc-42a3-85cd-8a460027d3ec") : secret "image-registry-operator-tls" not found Mar 08 21:57:34.494631 master-0 kubenswrapper[7480]: I0308 21:57:34.494516 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs\") pod \"multus-admission-controller-8d675b596-ddw98\" (UID: \"1dfc8afd-2330-46a4-ae5b-36522102b332\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddw98" Mar 08 21:57:34.494631 master-0 kubenswrapper[7480]: I0308 21:57:34.494569 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-x5zxr\" (UID: \"be431b74-1116-4b0f-8b25-bbb0408411b0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 21:57:34.494631 master-0 kubenswrapper[7480]: E0308 21:57:34.494613 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls podName:df48e7e0-0659-48e2-9b6a-32c964ff47b2 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:38.494580595 +0000 UTC m=+8.948201427 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls") pod "dns-operator-589895fbb7-wtvp5" (UID: "df48e7e0-0659-48e2-9b6a-32c964ff47b2") : secret "metrics-tls" not found Mar 08 21:57:34.494631 master-0 kubenswrapper[7480]: E0308 21:57:34.494643 7480 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 08 21:57:34.495365 master-0 kubenswrapper[7480]: E0308 21:57:34.494676 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs podName:1dfc8afd-2330-46a4-ae5b-36522102b332 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:38.494661577 +0000 UTC m=+8.948282179 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs") pod "multus-admission-controller-8d675b596-ddw98" (UID: "1dfc8afd-2330-46a4-ae5b-36522102b332") : secret "multus-admission-controller-secret" not found Mar 08 21:57:34.495365 master-0 kubenswrapper[7480]: E0308 21:57:34.494616 7480 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 08 21:57:34.495365 master-0 kubenswrapper[7480]: E0308 21:57:34.494705 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert podName:be431b74-1116-4b0f-8b25-bbb0408411b0 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:38.494700118 +0000 UTC m=+8.948320720 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-x5zxr" (UID: "be431b74-1116-4b0f-8b25-bbb0408411b0") : secret "package-server-manager-serving-cert" not found Mar 08 21:57:34.495365 master-0 kubenswrapper[7480]: I0308 21:57:34.494741 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:34.495365 master-0 kubenswrapper[7480]: I0308 21:57:34.494813 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-5ljhh\" (UID: \"7e0267ba-5dd7-4e81-885f-95b27a7b42ea\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 21:57:34.495365 master-0 kubenswrapper[7480]: I0308 21:57:34.494925 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert\") pod \"catalog-operator-7d9c49f57b-6q5t2\" (UID: \"83b5f0b6-adee-4820-8212-b4d182b178d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" Mar 08 21:57:34.495365 master-0 kubenswrapper[7480]: I0308 21:57:34.495013 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-mt484\" (UID: \"4ef806a4-5486-43a9-8bfa-b1670c888dc1\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" Mar 08 21:57:34.495365 master-0 kubenswrapper[7480]: I0308 21:57:34.495128 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:34.495365 master-0 kubenswrapper[7480]: I0308 21:57:34.495184 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:57:34.495365 master-0 kubenswrapper[7480]: I0308 21:57:34.495235 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs\") pod \"network-metrics-daemon-lqdbv\" (UID: \"44e67e41-045e-42ef-8f60-6ef15606d6a2\") " pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:57:34.495365 master-0 kubenswrapper[7480]: I0308 21:57:34.495290 
7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 21:57:34.496240 master-0 kubenswrapper[7480]: E0308 21:57:34.495523 7480 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 08 21:57:34.496240 master-0 kubenswrapper[7480]: E0308 21:57:34.495589 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls podName:84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed nodeName:}" failed. No retries permitted until 2026-03-08 21:57:38.495574441 +0000 UTC m=+8.949195083 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls") pod "ingress-operator-677db989d6-cjdgr" (UID: "84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed") : secret "metrics-tls" not found Mar 08 21:57:34.496240 master-0 kubenswrapper[7480]: E0308 21:57:34.495663 7480 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 08 21:57:34.496240 master-0 kubenswrapper[7480]: E0308 21:57:34.495699 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls podName:2851c096-f5cb-4a46-a5a0-ac0b1341033b nodeName:}" failed. No retries permitted until 2026-03-08 21:57:38.495686904 +0000 UTC m=+8.949307546 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-c4lpf" (UID: "2851c096-f5cb-4a46-a5a0-ac0b1341033b") : secret "node-tuning-operator-tls" not found Mar 08 21:57:34.496240 master-0 kubenswrapper[7480]: E0308 21:57:34.495760 7480 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 08 21:57:34.496240 master-0 kubenswrapper[7480]: E0308 21:57:34.495800 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics podName:7e0267ba-5dd7-4e81-885f-95b27a7b42ea nodeName:}" failed. No retries permitted until 2026-03-08 21:57:38.495787066 +0000 UTC m=+8.949407708 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-5ljhh" (UID: "7e0267ba-5dd7-4e81-885f-95b27a7b42ea") : secret "marketplace-operator-metrics" not found Mar 08 21:57:34.496240 master-0 kubenswrapper[7480]: E0308 21:57:34.495858 7480 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 08 21:57:34.496240 master-0 kubenswrapper[7480]: E0308 21:57:34.495890 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert podName:83b5f0b6-adee-4820-8212-b4d182b178d2 nodeName:}" failed. 
No retries permitted until 2026-03-08 21:57:38.495879379 +0000 UTC m=+8.949500021 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert") pod "catalog-operator-7d9c49f57b-6q5t2" (UID: "83b5f0b6-adee-4820-8212-b4d182b178d2") : secret "catalog-operator-serving-cert" not found Mar 08 21:57:34.496240 master-0 kubenswrapper[7480]: E0308 21:57:34.495990 7480 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 08 21:57:34.496240 master-0 kubenswrapper[7480]: E0308 21:57:34.496045 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls podName:4ef806a4-5486-43a9-8bfa-b1670c888dc1 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:38.496031203 +0000 UTC m=+8.949651845 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-mt484" (UID: "4ef806a4-5486-43a9-8bfa-b1670c888dc1") : secret "cluster-monitoring-operator-tls" not found Mar 08 21:57:34.496240 master-0 kubenswrapper[7480]: E0308 21:57:34.496158 7480 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 08 21:57:34.496240 master-0 kubenswrapper[7480]: E0308 21:57:34.496194 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert podName:2851c096-f5cb-4a46-a5a0-ac0b1341033b nodeName:}" failed. No retries permitted until 2026-03-08 21:57:38.496182636 +0000 UTC m=+8.949803268 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-c4lpf" (UID: "2851c096-f5cb-4a46-a5a0-ac0b1341033b") : secret "performance-addon-operator-webhook-cert" not found Mar 08 21:57:34.496240 master-0 kubenswrapper[7480]: E0308 21:57:34.496226 7480 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 08 21:57:34.496240 master-0 kubenswrapper[7480]: E0308 21:57:34.496256 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert podName:d287e2ca-f134-4e34-96f7-50a3055ee119 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:38.496248108 +0000 UTC m=+8.949868710 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert") pod "cluster-version-operator-745944c6b7-d8fd8" (UID: "d287e2ca-f134-4e34-96f7-50a3055ee119") : secret "cluster-version-operator-serving-cert" not found Mar 08 21:57:34.496781 master-0 kubenswrapper[7480]: E0308 21:57:34.496261 7480 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 08 21:57:34.496781 master-0 kubenswrapper[7480]: E0308 21:57:34.496312 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs podName:44e67e41-045e-42ef-8f60-6ef15606d6a2 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:38.496300609 +0000 UTC m=+8.949921251 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs") pod "network-metrics-daemon-lqdbv" (UID: "44e67e41-045e-42ef-8f60-6ef15606d6a2") : secret "metrics-daemon-secret" not found Mar 08 21:57:34.949641 master-0 kubenswrapper[7480]: I0308 21:57:34.949552 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" event={"ID":"c901b468-b8e9-48f8-8050-0d54e24e2adb","Type":"ContainerStarted","Data":"5e6100d027b85834b0f36e6902f07cf9a882faac96d2f9348fa6d8cef4d4f07c"} Mar 08 21:57:34.954619 master-0 kubenswrapper[7480]: I0308 21:57:34.951254 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-bf6ws" event={"ID":"0d0feb73-2ef6-4083-81ce-82a1394ce9c4","Type":"ContainerStarted","Data":"03e24173b288bd97ec848e0cf7a888e3b1e752701cc2a0adfe31f0bbf45fd669"} Mar 08 21:57:34.954619 master-0 kubenswrapper[7480]: I0308 21:57:34.953210 7480 generic.go:334] "Generic (PLEG): container finished" podID="de89c423-0f2a-440f-9fa9-92fefea84b09" containerID="72b0e6a3cc3f97f5e2663934796c3814c98efd81ba66b9d9762bd04c86de3111" exitCode=0 Mar 08 21:57:34.954619 master-0 kubenswrapper[7480]: I0308 21:57:34.953384 7480 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 21:57:34.954619 master-0 kubenswrapper[7480]: I0308 21:57:34.954360 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" event={"ID":"de89c423-0f2a-440f-9fa9-92fefea84b09","Type":"ContainerDied","Data":"72b0e6a3cc3f97f5e2663934796c3814c98efd81ba66b9d9762bd04c86de3111"} Mar 08 21:57:35.161568 master-0 kubenswrapper[7480]: I0308 21:57:35.160447 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-7f7b4"] Mar 08 21:57:35.161568 master-0 kubenswrapper[7480]: E0308 21:57:35.160730 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-6f7fd6c796-7f7b4" podUID="84b193de-34da-49b6-bf13-7b97399e7d07" Mar 08 21:57:35.179486 master-0 kubenswrapper[7480]: I0308 21:57:35.178921 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58959cd4d6-k8jj7"] Mar 08 21:57:35.179486 master-0 kubenswrapper[7480]: E0308 21:57:35.179257 7480 pod_workers.go:1301] 
"Error syncing pod, skipping" err="unmounted volumes=[client-ca config serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-k8jj7" podUID="ba79b6eb-0db6-43a4-abd9-8fc35066c103" Mar 08 21:57:35.274010 master-0 kubenswrapper[7480]: I0308 21:57:35.273940 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-84bfdbbb7f-b8zkz"] Mar 08 21:57:35.274725 master-0 kubenswrapper[7480]: I0308 21:57:35.274694 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-84bfdbbb7f-b8zkz" Mar 08 21:57:35.277881 master-0 kubenswrapper[7480]: I0308 21:57:35.277808 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 08 21:57:35.280409 master-0 kubenswrapper[7480]: I0308 21:57:35.280348 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 08 21:57:35.281551 master-0 kubenswrapper[7480]: I0308 21:57:35.281441 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 08 21:57:35.281551 master-0 kubenswrapper[7480]: I0308 21:57:35.281449 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 08 21:57:35.285989 master-0 kubenswrapper[7480]: I0308 21:57:35.285664 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-84bfdbbb7f-b8zkz"] Mar 08 21:57:35.326719 master-0 kubenswrapper[7480]: I0308 21:57:35.324716 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-7f7b4\" (UID: \"84b193de-34da-49b6-bf13-7b97399e7d07\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-7f7b4" Mar 08 21:57:35.326719 master-0 kubenswrapper[7480]: I0308 21:57:35.324774 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84b193de-34da-49b6-bf13-7b97399e7d07-serving-cert\") pod \"controller-manager-6f7fd6c796-7f7b4\" (UID: \"84b193de-34da-49b6-bf13-7b97399e7d07\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-7f7b4" Mar 08 21:57:35.326719 master-0 kubenswrapper[7480]: I0308 21:57:35.324802 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-config\") pod \"controller-manager-6f7fd6c796-7f7b4\" (UID: \"84b193de-34da-49b6-bf13-7b97399e7d07\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-7f7b4" Mar 08 21:57:35.326719 master-0 kubenswrapper[7480]: I0308 21:57:35.324826 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-client-ca\") pod \"controller-manager-6f7fd6c796-7f7b4\" (UID: \"84b193de-34da-49b6-bf13-7b97399e7d07\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-7f7b4" Mar 08 21:57:35.326719 master-0 kubenswrapper[7480]: E0308 21:57:35.324949 7480 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 08 21:57:35.326719 master-0 
kubenswrapper[7480]: E0308 21:57:35.325015 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-client-ca podName:84b193de-34da-49b6-bf13-7b97399e7d07 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:37.324997448 +0000 UTC m=+7.778618060 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-client-ca") pod "controller-manager-6f7fd6c796-7f7b4" (UID: "84b193de-34da-49b6-bf13-7b97399e7d07") : configmap "client-ca" not found Mar 08 21:57:35.326719 master-0 kubenswrapper[7480]: E0308 21:57:35.325126 7480 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 08 21:57:35.326719 master-0 kubenswrapper[7480]: E0308 21:57:35.325220 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84b193de-34da-49b6-bf13-7b97399e7d07-serving-cert podName:84b193de-34da-49b6-bf13-7b97399e7d07 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:37.325200443 +0000 UTC m=+7.778821045 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/84b193de-34da-49b6-bf13-7b97399e7d07-serving-cert") pod "controller-manager-6f7fd6c796-7f7b4" (UID: "84b193de-34da-49b6-bf13-7b97399e7d07") : secret "serving-cert" not found Mar 08 21:57:35.326719 master-0 kubenswrapper[7480]: I0308 21:57:35.326026 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-config\") pod \"controller-manager-6f7fd6c796-7f7b4\" (UID: \"84b193de-34da-49b6-bf13-7b97399e7d07\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-7f7b4" Mar 08 21:57:35.326719 master-0 kubenswrapper[7480]: I0308 21:57:35.326634 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-7f7b4\" (UID: \"84b193de-34da-49b6-bf13-7b97399e7d07\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-7f7b4" Mar 08 21:57:35.426918 master-0 kubenswrapper[7480]: E0308 21:57:35.425995 7480 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 08 21:57:35.426918 master-0 kubenswrapper[7480]: E0308 21:57:35.426119 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba79b6eb-0db6-43a4-abd9-8fc35066c103-client-ca podName:ba79b6eb-0db6-43a4-abd9-8fc35066c103 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:37.426095335 +0000 UTC m=+7.879715957 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/ba79b6eb-0db6-43a4-abd9-8fc35066c103-client-ca") pod "route-controller-manager-58959cd4d6-k8jj7" (UID: "ba79b6eb-0db6-43a4-abd9-8fc35066c103") : configmap "client-ca" not found Mar 08 21:57:35.426918 master-0 kubenswrapper[7480]: I0308 21:57:35.425904 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba79b6eb-0db6-43a4-abd9-8fc35066c103-client-ca\") pod \"route-controller-manager-58959cd4d6-k8jj7\" (UID: \"ba79b6eb-0db6-43a4-abd9-8fc35066c103\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-k8jj7" Mar 08 21:57:35.426918 master-0 kubenswrapper[7480]: I0308 21:57:35.426457 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e8ef68b9-6f8d-4697-b269-91ee4e310752-signing-cabundle\") pod \"service-ca-84bfdbbb7f-b8zkz\" (UID: \"e8ef68b9-6f8d-4697-b269-91ee4e310752\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-b8zkz" Mar 08 21:57:35.426918 master-0 kubenswrapper[7480]: I0308 21:57:35.426547 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ht4t\" (UniqueName: \"kubernetes.io/projected/e8ef68b9-6f8d-4697-b269-91ee4e310752-kube-api-access-6ht4t\") pod \"service-ca-84bfdbbb7f-b8zkz\" (UID: \"e8ef68b9-6f8d-4697-b269-91ee4e310752\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-b8zkz" Mar 08 21:57:35.426918 master-0 kubenswrapper[7480]: I0308 21:57:35.426618 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba79b6eb-0db6-43a4-abd9-8fc35066c103-config\") pod \"route-controller-manager-58959cd4d6-k8jj7\" (UID: \"ba79b6eb-0db6-43a4-abd9-8fc35066c103\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-k8jj7" Mar 08 21:57:35.426918 master-0 kubenswrapper[7480]: I0308 21:57:35.426657 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba79b6eb-0db6-43a4-abd9-8fc35066c103-serving-cert\") pod \"route-controller-manager-58959cd4d6-k8jj7\" (UID: \"ba79b6eb-0db6-43a4-abd9-8fc35066c103\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-k8jj7" Mar 08 21:57:35.426918 master-0 kubenswrapper[7480]: E0308 21:57:35.426765 7480 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 08 21:57:35.426918 master-0 kubenswrapper[7480]: E0308 21:57:35.426796 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba79b6eb-0db6-43a4-abd9-8fc35066c103-serving-cert podName:ba79b6eb-0db6-43a4-abd9-8fc35066c103 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:37.426785704 +0000 UTC m=+7.880406316 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ba79b6eb-0db6-43a4-abd9-8fc35066c103-serving-cert") pod "route-controller-manager-58959cd4d6-k8jj7" (UID: "ba79b6eb-0db6-43a4-abd9-8fc35066c103") : secret "serving-cert" not found Mar 08 21:57:35.426918 master-0 kubenswrapper[7480]: I0308 21:57:35.426819 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e8ef68b9-6f8d-4697-b269-91ee4e310752-signing-key\") pod \"service-ca-84bfdbbb7f-b8zkz\" (UID: \"e8ef68b9-6f8d-4697-b269-91ee4e310752\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-b8zkz" Mar 08 21:57:35.427909 master-0 kubenswrapper[7480]: I0308 21:57:35.427880 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba79b6eb-0db6-43a4-abd9-8fc35066c103-config\") pod \"route-controller-manager-58959cd4d6-k8jj7\" (UID: \"ba79b6eb-0db6-43a4-abd9-8fc35066c103\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-k8jj7" Mar 08 21:57:35.527624 master-0 kubenswrapper[7480]: I0308 21:57:35.527456 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e8ef68b9-6f8d-4697-b269-91ee4e310752-signing-cabundle\") pod \"service-ca-84bfdbbb7f-b8zkz\" (UID: \"e8ef68b9-6f8d-4697-b269-91ee4e310752\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-b8zkz" Mar 08 21:57:35.527624 master-0 kubenswrapper[7480]: I0308 21:57:35.527537 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ht4t\" (UniqueName: \"kubernetes.io/projected/e8ef68b9-6f8d-4697-b269-91ee4e310752-kube-api-access-6ht4t\") pod \"service-ca-84bfdbbb7f-b8zkz\" (UID: \"e8ef68b9-6f8d-4697-b269-91ee4e310752\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-b8zkz" Mar 08 21:57:35.527989 master-0 kubenswrapper[7480]: I0308 21:57:35.527660 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e8ef68b9-6f8d-4697-b269-91ee4e310752-signing-key\") pod \"service-ca-84bfdbbb7f-b8zkz\" (UID: \"e8ef68b9-6f8d-4697-b269-91ee4e310752\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-b8zkz" Mar 08 21:57:35.530407 master-0 kubenswrapper[7480]: I0308 21:57:35.528812 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e8ef68b9-6f8d-4697-b269-91ee4e310752-signing-cabundle\") pod \"service-ca-84bfdbbb7f-b8zkz\" (UID: \"e8ef68b9-6f8d-4697-b269-91ee4e310752\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-b8zkz" Mar 08 21:57:35.563182 master-0 kubenswrapper[7480]: I0308 21:57:35.537903 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e8ef68b9-6f8d-4697-b269-91ee4e310752-signing-key\") pod \"service-ca-84bfdbbb7f-b8zkz\" (UID: \"e8ef68b9-6f8d-4697-b269-91ee4e310752\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-b8zkz" Mar 08 21:57:35.563182 master-0 kubenswrapper[7480]: I0308 21:57:35.553901 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ht4t\" (UniqueName: \"kubernetes.io/projected/e8ef68b9-6f8d-4697-b269-91ee4e310752-kube-api-access-6ht4t\") pod \"service-ca-84bfdbbb7f-b8zkz\" (UID: \"e8ef68b9-6f8d-4697-b269-91ee4e310752\") " 
pod="openshift-service-ca/service-ca-84bfdbbb7f-b8zkz" Mar 08 21:57:35.639378 master-0 kubenswrapper[7480]: I0308 21:57:35.639265 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-84bfdbbb7f-b8zkz" Mar 08 21:57:35.882020 master-0 kubenswrapper[7480]: I0308 21:57:35.881968 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-84bfdbbb7f-b8zkz"] Mar 08 21:57:35.960858 master-0 kubenswrapper[7480]: I0308 21:57:35.960379 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" event={"ID":"d0641333-feda-44c5-baf5-ceee4ce3fd8f","Type":"ContainerStarted","Data":"5606fcf795565b19c0d649668bacd0041a38e917c804757278c207fde8081155"} Mar 08 21:57:35.982111 master-0 kubenswrapper[7480]: I0308 21:57:35.961120 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" Mar 08 21:57:35.982111 master-0 kubenswrapper[7480]: I0308 21:57:35.967756 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-k8jj7" Mar 08 21:57:35.982111 master-0 kubenswrapper[7480]: I0308 21:57:35.967947 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-b8zkz" event={"ID":"e8ef68b9-6f8d-4697-b269-91ee4e310752","Type":"ContainerStarted","Data":"65b211739156dcea6c9fedd48dbe1e6cb8361762b8f9a787cf0192fa0b5059a7"} Mar 08 21:57:35.982111 master-0 kubenswrapper[7480]: I0308 21:57:35.968307 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-7f7b4" Mar 08 21:57:35.991413 master-0 kubenswrapper[7480]: I0308 21:57:35.991362 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-k8jj7" Mar 08 21:57:35.999187 master-0 kubenswrapper[7480]: I0308 21:57:35.999149 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-7f7b4" Mar 08 21:57:36.136688 master-0 kubenswrapper[7480]: I0308 21:57:36.136397 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-782mx\" (UniqueName: \"kubernetes.io/projected/84b193de-34da-49b6-bf13-7b97399e7d07-kube-api-access-782mx\") pod \"84b193de-34da-49b6-bf13-7b97399e7d07\" (UID: \"84b193de-34da-49b6-bf13-7b97399e7d07\") " Mar 08 21:57:36.136688 master-0 kubenswrapper[7480]: I0308 21:57:36.136496 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-config\") pod \"84b193de-34da-49b6-bf13-7b97399e7d07\" (UID: \"84b193de-34da-49b6-bf13-7b97399e7d07\") " Mar 08 21:57:36.136688 master-0 kubenswrapper[7480]: I0308 21:57:36.136548 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba79b6eb-0db6-43a4-abd9-8fc35066c103-config\") pod \"ba79b6eb-0db6-43a4-abd9-8fc35066c103\" (UID: \"ba79b6eb-0db6-43a4-abd9-8fc35066c103\") " Mar 08 21:57:36.136688 master-0 kubenswrapper[7480]: I0308 21:57:36.136578 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2frj\" (UniqueName: \"kubernetes.io/projected/ba79b6eb-0db6-43a4-abd9-8fc35066c103-kube-api-access-n2frj\") pod \"ba79b6eb-0db6-43a4-abd9-8fc35066c103\" (UID: \"ba79b6eb-0db6-43a4-abd9-8fc35066c103\") " Mar 08 21:57:36.136688 master-0 kubenswrapper[7480]: I0308 21:57:36.136657 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-proxy-ca-bundles\") pod \"84b193de-34da-49b6-bf13-7b97399e7d07\" (UID: \"84b193de-34da-49b6-bf13-7b97399e7d07\") " Mar 08 21:57:36.138795 master-0 kubenswrapper[7480]: I0308 21:57:36.138728 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-config" (OuterVolumeSpecName: "config") pod "84b193de-34da-49b6-bf13-7b97399e7d07" (UID: "84b193de-34da-49b6-bf13-7b97399e7d07"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 21:57:36.139842 master-0 kubenswrapper[7480]: I0308 21:57:36.139283 7480 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-config\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:36.139842 master-0 kubenswrapper[7480]: I0308 21:57:36.139427 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "84b193de-34da-49b6-bf13-7b97399e7d07" (UID: "84b193de-34da-49b6-bf13-7b97399e7d07"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 21:57:36.144372 master-0 kubenswrapper[7480]: I0308 21:57:36.143590 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba79b6eb-0db6-43a4-abd9-8fc35066c103-config" (OuterVolumeSpecName: "config") pod "ba79b6eb-0db6-43a4-abd9-8fc35066c103" (UID: "ba79b6eb-0db6-43a4-abd9-8fc35066c103"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 21:57:36.146529 master-0 kubenswrapper[7480]: I0308 21:57:36.146422 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84b193de-34da-49b6-bf13-7b97399e7d07-kube-api-access-782mx" (OuterVolumeSpecName: "kube-api-access-782mx") pod "84b193de-34da-49b6-bf13-7b97399e7d07" (UID: "84b193de-34da-49b6-bf13-7b97399e7d07"). InnerVolumeSpecName "kube-api-access-782mx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 21:57:36.147436 master-0 kubenswrapper[7480]: I0308 21:57:36.147390 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba79b6eb-0db6-43a4-abd9-8fc35066c103-kube-api-access-n2frj" (OuterVolumeSpecName: "kube-api-access-n2frj") pod "ba79b6eb-0db6-43a4-abd9-8fc35066c103" (UID: "ba79b6eb-0db6-43a4-abd9-8fc35066c103"). InnerVolumeSpecName "kube-api-access-n2frj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 21:57:36.240844 master-0 kubenswrapper[7480]: I0308 21:57:36.240797 7480 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:36.240844 master-0 kubenswrapper[7480]: I0308 21:57:36.240836 7480 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-782mx\" (UniqueName: \"kubernetes.io/projected/84b193de-34da-49b6-bf13-7b97399e7d07-kube-api-access-782mx\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:36.240844 master-0 kubenswrapper[7480]: I0308 21:57:36.240849 7480 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba79b6eb-0db6-43a4-abd9-8fc35066c103-config\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:36.240844 master-0 kubenswrapper[7480]: I0308 21:57:36.240859 7480 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2frj\" (UniqueName: \"kubernetes.io/projected/ba79b6eb-0db6-43a4-abd9-8fc35066c103-kube-api-access-n2frj\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:36.811453 master-0 kubenswrapper[7480]: I0308 21:57:36.811383 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 21:57:36.811735 master-0 kubenswrapper[7480]: I0308 21:57:36.811502 7480 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 21:57:36.828996 master-0 kubenswrapper[7480]: I0308 21:57:36.828911 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 21:57:36.976113 master-0 kubenswrapper[7480]: I0308 21:57:36.976057 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-b8zkz" event={"ID":"e8ef68b9-6f8d-4697-b269-91ee4e310752","Type":"ContainerStarted","Data":"3724b6db595f74186edc6baea18527f6eae9fe894eef0ca447fc3a5e5c129bfc"} Mar 08 21:57:36.976959 master-0 kubenswrapper[7480]: I0308 21:57:36.976943 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-k8jj7" Mar 08 21:57:36.977286 master-0 kubenswrapper[7480]: I0308 21:57:36.977231 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-7f7b4" Mar 08 21:57:37.026686 master-0 kubenswrapper[7480]: I0308 21:57:37.026606 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-84bfdbbb7f-b8zkz" podStartSLOduration=2.026579372 podStartE2EDuration="2.026579372s" podCreationTimestamp="2026-03-08 21:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 21:57:37.01150376 +0000 UTC m=+7.465124362" watchObservedRunningTime="2026-03-08 21:57:37.026579372 +0000 UTC m=+7.480199974" Mar 08 21:57:37.073717 master-0 kubenswrapper[7480]: I0308 21:57:37.073661 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58959cd4d6-k8jj7"] Mar 08 21:57:37.088717 master-0 kubenswrapper[7480]: I0308 21:57:37.088663 7480 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58959cd4d6-k8jj7"] Mar 08 21:57:37.110758 master-0 kubenswrapper[7480]: I0308 21:57:37.109323 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn"] Mar 08 21:57:37.110758 master-0 kubenswrapper[7480]: I0308 21:57:37.109942 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn" Mar 08 21:57:37.125716 master-0 kubenswrapper[7480]: I0308 21:57:37.125674 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 08 21:57:37.138245 master-0 kubenswrapper[7480]: I0308 21:57:37.138193 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 08 21:57:37.138670 master-0 kubenswrapper[7480]: I0308 21:57:37.138556 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 08 21:57:37.138907 master-0 kubenswrapper[7480]: I0308 21:57:37.138868 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 08 21:57:37.139065 master-0 kubenswrapper[7480]: I0308 21:57:37.139019 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 08 21:57:37.155133 master-0 kubenswrapper[7480]: I0308 21:57:37.148935 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn"] Mar 08 21:57:37.168743 master-0 kubenswrapper[7480]: I0308 21:57:37.168670 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/838f81b9-0423-437e-88ed-88eebfe4c188-config\") pod \"route-controller-manager-685b849569-wt9mn\" (UID: \"838f81b9-0423-437e-88ed-88eebfe4c188\") " pod="openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn" Mar 08 21:57:37.168743 master-0 kubenswrapper[7480]: I0308 21:57:37.168728 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tn4g\" (UniqueName: \"kubernetes.io/projected/838f81b9-0423-437e-88ed-88eebfe4c188-kube-api-access-4tn4g\") pod 
\"route-controller-manager-685b849569-wt9mn\" (UID: \"838f81b9-0423-437e-88ed-88eebfe4c188\") " pod="openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn" Mar 08 21:57:37.168743 master-0 kubenswrapper[7480]: I0308 21:57:37.168753 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/838f81b9-0423-437e-88ed-88eebfe4c188-client-ca\") pod \"route-controller-manager-685b849569-wt9mn\" (UID: \"838f81b9-0423-437e-88ed-88eebfe4c188\") " pod="openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn" Mar 08 21:57:37.168994 master-0 kubenswrapper[7480]: I0308 21:57:37.168778 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/838f81b9-0423-437e-88ed-88eebfe4c188-serving-cert\") pod \"route-controller-manager-685b849569-wt9mn\" (UID: \"838f81b9-0423-437e-88ed-88eebfe4c188\") " pod="openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn" Mar 08 21:57:37.168994 master-0 kubenswrapper[7480]: I0308 21:57:37.168893 7480 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba79b6eb-0db6-43a4-abd9-8fc35066c103-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:37.168994 master-0 kubenswrapper[7480]: I0308 21:57:37.168911 7480 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba79b6eb-0db6-43a4-abd9-8fc35066c103-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:37.179970 master-0 kubenswrapper[7480]: I0308 21:57:37.179888 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-7f7b4"] Mar 08 21:57:37.188797 master-0 kubenswrapper[7480]: I0308 21:57:37.188742 7480 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-7f7b4"] Mar 08 21:57:37.270398 master-0 kubenswrapper[7480]: I0308 21:57:37.270343 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/838f81b9-0423-437e-88ed-88eebfe4c188-config\") pod \"route-controller-manager-685b849569-wt9mn\" (UID: \"838f81b9-0423-437e-88ed-88eebfe4c188\") " pod="openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn" Mar 08 21:57:37.270398 master-0 kubenswrapper[7480]: I0308 21:57:37.270396 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tn4g\" (UniqueName: \"kubernetes.io/projected/838f81b9-0423-437e-88ed-88eebfe4c188-kube-api-access-4tn4g\") pod \"route-controller-manager-685b849569-wt9mn\" (UID: \"838f81b9-0423-437e-88ed-88eebfe4c188\") " pod="openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn" Mar 08 21:57:37.270661 master-0 kubenswrapper[7480]: I0308 21:57:37.270432 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/838f81b9-0423-437e-88ed-88eebfe4c188-client-ca\") pod \"route-controller-manager-685b849569-wt9mn\" (UID: \"838f81b9-0423-437e-88ed-88eebfe4c188\") " pod="openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn" Mar 08 21:57:37.270795 master-0 kubenswrapper[7480]: I0308 21:57:37.270704 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/838f81b9-0423-437e-88ed-88eebfe4c188-serving-cert\") pod \"route-controller-manager-685b849569-wt9mn\" (UID: \"838f81b9-0423-437e-88ed-88eebfe4c188\") " pod="openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn" Mar 08 21:57:37.270795 master-0 kubenswrapper[7480]: I0308 21:57:37.270793 7480 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84b193de-34da-49b6-bf13-7b97399e7d07-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:37.270866 master-0 kubenswrapper[7480]: E0308 21:57:37.270788 7480 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 08 21:57:37.271018 master-0 kubenswrapper[7480]: E0308 21:57:37.270894 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/838f81b9-0423-437e-88ed-88eebfe4c188-client-ca podName:838f81b9-0423-437e-88ed-88eebfe4c188 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:37.770867442 +0000 UTC m=+8.224488044 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/838f81b9-0423-437e-88ed-88eebfe4c188-client-ca") pod "route-controller-manager-685b849569-wt9mn" (UID: "838f81b9-0423-437e-88ed-88eebfe4c188") : configmap "client-ca" not found Mar 08 21:57:37.271018 master-0 kubenswrapper[7480]: E0308 21:57:37.270912 7480 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 08 21:57:37.271018 master-0 kubenswrapper[7480]: I0308 21:57:37.270805 7480 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84b193de-34da-49b6-bf13-7b97399e7d07-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:37.271018 master-0 kubenswrapper[7480]: E0308 21:57:37.270975 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/838f81b9-0423-437e-88ed-88eebfe4c188-serving-cert podName:838f81b9-0423-437e-88ed-88eebfe4c188 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:37.770955484 +0000 UTC m=+8.224576086 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/838f81b9-0423-437e-88ed-88eebfe4c188-serving-cert") pod "route-controller-manager-685b849569-wt9mn" (UID: "838f81b9-0423-437e-88ed-88eebfe4c188") : secret "serving-cert" not found Mar 08 21:57:37.272309 master-0 kubenswrapper[7480]: I0308 21:57:37.272280 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/838f81b9-0423-437e-88ed-88eebfe4c188-config\") pod \"route-controller-manager-685b849569-wt9mn\" (UID: \"838f81b9-0423-437e-88ed-88eebfe4c188\") " pod="openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn" Mar 08 21:57:37.308172 master-0 kubenswrapper[7480]: I0308 21:57:37.307966 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tn4g\" (UniqueName: \"kubernetes.io/projected/838f81b9-0423-437e-88ed-88eebfe4c188-kube-api-access-4tn4g\") pod \"route-controller-manager-685b849569-wt9mn\" (UID: \"838f81b9-0423-437e-88ed-88eebfe4c188\") " pod="openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn" Mar 08 21:57:37.779164 master-0 kubenswrapper[7480]: I0308 21:57:37.777600 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/838f81b9-0423-437e-88ed-88eebfe4c188-client-ca\") pod \"route-controller-manager-685b849569-wt9mn\" (UID: \"838f81b9-0423-437e-88ed-88eebfe4c188\") " pod="openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn" Mar 08 21:57:37.779164 master-0 kubenswrapper[7480]: I0308 21:57:37.779172 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/838f81b9-0423-437e-88ed-88eebfe4c188-serving-cert\") pod \"route-controller-manager-685b849569-wt9mn\" (UID: \"838f81b9-0423-437e-88ed-88eebfe4c188\") " pod="openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn" Mar 08 21:57:37.781126 master-0 kubenswrapper[7480]: E0308 21:57:37.777925 7480 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 08 21:57:37.781126 master-0 kubenswrapper[7480]: E0308 21:57:37.779418 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/838f81b9-0423-437e-88ed-88eebfe4c188-client-ca podName:838f81b9-0423-437e-88ed-88eebfe4c188 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:38.779396728 +0000 UTC m=+9.233017320 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/838f81b9-0423-437e-88ed-88eebfe4c188-client-ca") pod "route-controller-manager-685b849569-wt9mn" (UID: "838f81b9-0423-437e-88ed-88eebfe4c188") : configmap "client-ca" not found Mar 08 21:57:37.781126 master-0 kubenswrapper[7480]: E0308 21:57:37.779348 7480 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 08 21:57:37.781126 master-0 kubenswrapper[7480]: E0308 21:57:37.779491 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/838f81b9-0423-437e-88ed-88eebfe4c188-serving-cert podName:838f81b9-0423-437e-88ed-88eebfe4c188 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:38.779479221 +0000 UTC m=+9.233099823 (durationBeforeRetry 1s). 
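
The pod above keeps failing on the same two objects, configmap "client-ca" and secret "serving-cert" in openshift-route-controller-manager, and the kubelet will keep retrying until whatever publishes them catches up. A direct way to check whether the referenced object exists yet is to ask the API server. A sketch with client-go, assuming in-cluster credentials (substitute clientcmd and a kubeconfig when running from a workstation):

    package main

    import (
    	"context"
    	"fmt"

    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg, err := rest.InClusterConfig() // assumes running inside the cluster
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Namespace and name are taken from the failing mount above.
    	_, err = cs.CoreV1().ConfigMaps("openshift-route-controller-manager").
    		Get(context.TODO(), "client-ca", metav1.GetOptions{})
    	switch {
    	case apierrors.IsNotFound(err):
    		fmt.Println("client-ca not published yet; kubelet will retry")
    	case err != nil:
    		panic(err)
    	default:
    		fmt.Println("client-ca exists; the next retry should mount it")
    	}
    }
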
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/838f81b9-0423-437e-88ed-88eebfe4c188-serving-cert") pod "route-controller-manager-685b849569-wt9mn" (UID: "838f81b9-0423-437e-88ed-88eebfe4c188") : secret "serving-cert" not found Mar 08 21:57:37.787921 master-0 kubenswrapper[7480]: I0308 21:57:37.787794 7480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84b193de-34da-49b6-bf13-7b97399e7d07" path="/var/lib/kubelet/pods/84b193de-34da-49b6-bf13-7b97399e7d07/volumes" Mar 08 21:57:37.788726 master-0 kubenswrapper[7480]: I0308 21:57:37.788691 7480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba79b6eb-0db6-43a4-abd9-8fc35066c103" path="/var/lib/kubelet/pods/ba79b6eb-0db6-43a4-abd9-8fc35066c103/volumes" Mar 08 21:57:37.992330 master-0 kubenswrapper[7480]: I0308 21:57:37.992256 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-pwn9k" event={"ID":"b358dcb7-d01f-4206-b636-b55a599a73bd","Type":"ContainerStarted","Data":"4c93513e2411671b591d80db5767b0a883ed647283a5daee6cc24464557c94b7"} Mar 08 21:57:38.019374 master-0 kubenswrapper[7480]: I0308 21:57:38.019287 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" Mar 08 21:57:38.502304 master-0 kubenswrapper[7480]: I0308 21:57:38.501368 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:38.502304 master-0 kubenswrapper[7480]: I0308 21:57:38.501422 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert\") pod \"olm-operator-d64cfc9db-xqh7x\" (UID: \"3c50dd1f-fcbc-412c-a1cc-0738ea4464e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" Mar 08 21:57:38.502304 master-0 kubenswrapper[7480]: I0308 21:57:38.501444 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls\") pod \"dns-operator-589895fbb7-wtvp5\" (UID: \"df48e7e0-0659-48e2-9b6a-32c964ff47b2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" Mar 08 21:57:38.502304 master-0 kubenswrapper[7480]: I0308 21:57:38.501467 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs\") pod \"multus-admission-controller-8d675b596-ddw98\" (UID: \"1dfc8afd-2330-46a4-ae5b-36522102b332\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddw98" Mar 08 21:57:38.502304 master-0 kubenswrapper[7480]: I0308 21:57:38.501489 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-x5zxr\" (UID: \"be431b74-1116-4b0f-8b25-bbb0408411b0\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 21:57:38.502304 master-0 kubenswrapper[7480]: I0308 21:57:38.501509 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:38.502304 master-0 kubenswrapper[7480]: I0308 21:57:38.501538 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-5ljhh\" (UID: \"7e0267ba-5dd7-4e81-885f-95b27a7b42ea\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 21:57:38.502304 master-0 kubenswrapper[7480]: I0308 21:57:38.501564 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert\") pod \"catalog-operator-7d9c49f57b-6q5t2\" (UID: \"83b5f0b6-adee-4820-8212-b4d182b178d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" Mar 08 21:57:38.502304 master-0 kubenswrapper[7480]: I0308 21:57:38.501586 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-mt484\" (UID: \"4ef806a4-5486-43a9-8bfa-b1670c888dc1\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" Mar 08 21:57:38.502304 master-0 kubenswrapper[7480]: I0308 21:57:38.501604 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 21:57:38.502304 master-0 kubenswrapper[7480]: I0308 21:57:38.501623 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:57:38.502304 master-0 kubenswrapper[7480]: I0308 21:57:38.501643 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs\") pod \"network-metrics-daemon-lqdbv\" (UID: \"44e67e41-045e-42ef-8f60-6ef15606d6a2\") " pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:57:38.502304 master-0 kubenswrapper[7480]: I0308 21:57:38.501663 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: 
\"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 21:57:38.502304 master-0 kubenswrapper[7480]: E0308 21:57:38.501787 7480 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 08 21:57:38.502304 master-0 kubenswrapper[7480]: E0308 21:57:38.501838 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls podName:84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed nodeName:}" failed. No retries permitted until 2026-03-08 21:57:46.501824695 +0000 UTC m=+16.955445297 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls") pod "ingress-operator-677db989d6-cjdgr" (UID: "84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed") : secret "metrics-tls" not found Mar 08 21:57:38.502304 master-0 kubenswrapper[7480]: E0308 21:57:38.502137 7480 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 08 21:57:38.502304 master-0 kubenswrapper[7480]: E0308 21:57:38.502175 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics podName:7e0267ba-5dd7-4e81-885f-95b27a7b42ea nodeName:}" failed. No retries permitted until 2026-03-08 21:57:46.502164123 +0000 UTC m=+16.955784725 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-5ljhh" (UID: "7e0267ba-5dd7-4e81-885f-95b27a7b42ea") : secret "marketplace-operator-metrics" not found Mar 08 21:57:38.502304 master-0 kubenswrapper[7480]: E0308 21:57:38.502239 7480 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 08 21:57:38.502304 master-0 kubenswrapper[7480]: E0308 21:57:38.502248 7480 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 08 21:57:38.502304 master-0 kubenswrapper[7480]: E0308 21:57:38.502265 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls podName:a913c639-ebfc-42a3-85cd-8a460027d3ec nodeName:}" failed. No retries permitted until 2026-03-08 21:57:46.502256495 +0000 UTC m=+16.955877097 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-g2ddr" (UID: "a913c639-ebfc-42a3-85cd-8a460027d3ec") : secret "image-registry-operator-tls" not found Mar 08 21:57:38.502304 master-0 kubenswrapper[7480]: E0308 21:57:38.502275 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert podName:3c50dd1f-fcbc-412c-a1cc-0738ea4464e0 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:46.502270206 +0000 UTC m=+16.955890808 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert") pod "olm-operator-d64cfc9db-xqh7x" (UID: "3c50dd1f-fcbc-412c-a1cc-0738ea4464e0") : secret "olm-operator-serving-cert" not found Mar 08 21:57:38.503031 master-0 kubenswrapper[7480]: E0308 21:57:38.502309 7480 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 08 21:57:38.503031 master-0 kubenswrapper[7480]: E0308 21:57:38.502419 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert podName:83b5f0b6-adee-4820-8212-b4d182b178d2 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:46.502396059 +0000 UTC m=+16.956016661 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert") pod "catalog-operator-7d9c49f57b-6q5t2" (UID: "83b5f0b6-adee-4820-8212-b4d182b178d2") : secret "catalog-operator-serving-cert" not found Mar 08 21:57:38.503031 master-0 kubenswrapper[7480]: E0308 21:57:38.502461 7480 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 08 21:57:38.503031 master-0 kubenswrapper[7480]: E0308 21:57:38.502483 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs podName:1dfc8afd-2330-46a4-ae5b-36522102b332 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:46.502476701 +0000 UTC m=+16.956097303 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs") pod "multus-admission-controller-8d675b596-ddw98" (UID: "1dfc8afd-2330-46a4-ae5b-36522102b332") : secret "multus-admission-controller-secret" not found Mar 08 21:57:38.503031 master-0 kubenswrapper[7480]: E0308 21:57:38.502516 7480 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 08 21:57:38.503031 master-0 kubenswrapper[7480]: E0308 21:57:38.502536 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls podName:df48e7e0-0659-48e2-9b6a-32c964ff47b2 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:46.502530422 +0000 UTC m=+16.956151024 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls") pod "dns-operator-589895fbb7-wtvp5" (UID: "df48e7e0-0659-48e2-9b6a-32c964ff47b2") : secret "metrics-tls" not found Mar 08 21:57:38.503031 master-0 kubenswrapper[7480]: E0308 21:57:38.502565 7480 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 08 21:57:38.503031 master-0 kubenswrapper[7480]: E0308 21:57:38.502584 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs podName:44e67e41-045e-42ef-8f60-6ef15606d6a2 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:46.502578984 +0000 UTC m=+16.956199586 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs") pod "network-metrics-daemon-lqdbv" (UID: "44e67e41-045e-42ef-8f60-6ef15606d6a2") : secret "metrics-daemon-secret" not found Mar 08 21:57:38.503031 master-0 kubenswrapper[7480]: E0308 21:57:38.502621 7480 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 08 21:57:38.503031 master-0 kubenswrapper[7480]: E0308 21:57:38.502641 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls podName:4ef806a4-5486-43a9-8bfa-b1670c888dc1 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:46.502635775 +0000 UTC m=+16.956256597 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-mt484" (UID: "4ef806a4-5486-43a9-8bfa-b1670c888dc1") : secret "cluster-monitoring-operator-tls" not found Mar 08 21:57:38.503031 master-0 kubenswrapper[7480]: E0308 21:57:38.502669 7480 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 08 21:57:38.503031 master-0 kubenswrapper[7480]: E0308 21:57:38.502687 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert podName:be431b74-1116-4b0f-8b25-bbb0408411b0 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:46.502681936 +0000 UTC m=+16.956302538 (durationBeforeRetry 8s). 
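
By this point the same handful of "not found" secrets has recurred across a dozen operators, with the backoff now at 8s. The secret.go:189 message is regular enough to pull a deduplicated list of still-missing secrets straight out of the journal; a sketch matched to exactly the message shape in this log (pipe in, e.g., the kubelet unit's journal):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    )

    // Matches lines shaped like the entries above:
    //   ... secret.go:189] Couldn't get secret <ns>/<name>: secret "<name>" not found
    var missingSecret = regexp.MustCompile(`Couldn't get secret ([^/ ]+)/(\S+): secret`)

    func main() {
    	seen := map[string]bool{}
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 1<<20), 1<<20) // journal lines here run very long
    	for sc.Scan() {
    		if m := missingSecret.FindStringSubmatch(sc.Text()); m != nil {
    			seen[m[1]+"/"+m[2]] = true
    		}
    	}
    	for s := range seen {
    		fmt.Println(s) // e.g. openshift-ingress-operator/metrics-tls
    	}
    }

Run against this capture it would print openshift-ingress-operator/metrics-tls, openshift-dns-operator/metrics-tls, openshift-multus/metrics-daemon-secret, and so on; the same idea extends to the configmap.go:193 messages for missing configmaps.
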
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-x5zxr" (UID: "be431b74-1116-4b0f-8b25-bbb0408411b0") : secret "package-server-manager-serving-cert" not found
Mar 08 21:57:38.506852 master-0 kubenswrapper[7480]: I0308 21:57:38.506397 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf"
Mar 08 21:57:38.506852 master-0 kubenswrapper[7480]: I0308 21:57:38.506815 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert\") pod \"cluster-version-operator-745944c6b7-d8fd8\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8"
Mar 08 21:57:38.507410 master-0 kubenswrapper[7480]: I0308 21:57:38.507384 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf"
Mar 08 21:57:38.538099 master-0 kubenswrapper[7480]: I0308 21:57:38.537994 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf"
Mar 08 21:57:38.539812 master-0 kubenswrapper[7480]: I0308 21:57:38.539599 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8"
Mar 08 21:57:38.563938 master-0 kubenswrapper[7480]: I0308 21:57:38.563846 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-754785bd46-4kk54"]
Mar 08 21:57:38.563938 master-0 kubenswrapper[7480]: I0308 21:57:38.564488 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-754785bd46-4kk54"
Mar 08 21:57:38.570801 master-0 kubenswrapper[7480]: I0308 21:57:38.568875 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 08 21:57:38.570801 master-0 kubenswrapper[7480]: I0308 21:57:38.569188 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 08 21:57:38.570801 master-0 kubenswrapper[7480]: I0308 21:57:38.569272 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 08 21:57:38.570801 master-0 kubenswrapper[7480]: I0308 21:57:38.569479 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 08 21:57:38.570801 master-0 kubenswrapper[7480]: I0308 21:57:38.569674 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 08 21:57:38.581846 master-0 kubenswrapper[7480]: I0308 21:57:38.581567 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-754785bd46-4kk54"]
Mar 08 21:57:38.587741 master-0 kubenswrapper[7480]: I0308 21:57:38.587695 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 08 21:57:38.602644 master-0 kubenswrapper[7480]: I0308 21:57:38.602561 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtsq7\" (UniqueName: \"kubernetes.io/projected/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-kube-api-access-vtsq7\") pod \"controller-manager-754785bd46-4kk54\" (UID: \"c0c7c5e3-7bbb-4a43-8202-5603a869aee6\") " pod="openshift-controller-manager/controller-manager-754785bd46-4kk54"
Mar 08 21:57:38.602929 master-0 kubenswrapper[7480]: I0308 21:57:38.602766 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-config\") pod \"controller-manager-754785bd46-4kk54\" (UID: \"c0c7c5e3-7bbb-4a43-8202-5603a869aee6\") " pod="openshift-controller-manager/controller-manager-754785bd46-4kk54"
Mar 08 21:57:38.602929 master-0 kubenswrapper[7480]: I0308 21:57:38.602896 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-serving-cert\") pod \"controller-manager-754785bd46-4kk54\" (UID: \"c0c7c5e3-7bbb-4a43-8202-5603a869aee6\") " pod="openshift-controller-manager/controller-manager-754785bd46-4kk54"
Mar 08 21:57:38.602929 master-0 kubenswrapper[7480]: I0308 21:57:38.602924 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-client-ca\") pod \"controller-manager-754785bd46-4kk54\" (UID: \"c0c7c5e3-7bbb-4a43-8202-5603a869aee6\") " pod="openshift-controller-manager/controller-manager-754785bd46-4kk54"
Mar 08 21:57:38.603056 master-0 kubenswrapper[7480]: I0308 21:57:38.602950 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-proxy-ca-bundles\") pod \"controller-manager-754785bd46-4kk54\" (UID: \"c0c7c5e3-7bbb-4a43-8202-5603a869aee6\") " pod="openshift-controller-manager/controller-manager-754785bd46-4kk54"
Mar 08 21:57:38.704813 master-0 kubenswrapper[7480]: I0308 21:57:38.704292 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-config\") pod \"controller-manager-754785bd46-4kk54\" (UID: \"c0c7c5e3-7bbb-4a43-8202-5603a869aee6\") " pod="openshift-controller-manager/controller-manager-754785bd46-4kk54"
Mar 08 21:57:38.705252 master-0 kubenswrapper[7480]: I0308 21:57:38.704907 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-serving-cert\") pod \"controller-manager-754785bd46-4kk54\" (UID: \"c0c7c5e3-7bbb-4a43-8202-5603a869aee6\") " pod="openshift-controller-manager/controller-manager-754785bd46-4kk54"
Mar 08 21:57:38.705252 master-0 kubenswrapper[7480]: I0308 21:57:38.704939 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-client-ca\") pod \"controller-manager-754785bd46-4kk54\" (UID: \"c0c7c5e3-7bbb-4a43-8202-5603a869aee6\") " pod="openshift-controller-manager/controller-manager-754785bd46-4kk54"
Mar 08 21:57:38.705252 master-0 kubenswrapper[7480]: I0308 21:57:38.704970 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-proxy-ca-bundles\") pod \"controller-manager-754785bd46-4kk54\" (UID: \"c0c7c5e3-7bbb-4a43-8202-5603a869aee6\") " pod="openshift-controller-manager/controller-manager-754785bd46-4kk54"
Mar 08 21:57:38.705381 master-0 kubenswrapper[7480]: E0308 21:57:38.705350 7480 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 08 21:57:38.705453 master-0 kubenswrapper[7480]: E0308 21:57:38.705437 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-client-ca podName:c0c7c5e3-7bbb-4a43-8202-5603a869aee6 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:39.205392665 +0000 UTC m=+9.659013267 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-client-ca") pod "controller-manager-754785bd46-4kk54" (UID: "c0c7c5e3-7bbb-4a43-8202-5603a869aee6") : configmap "client-ca" not found
Mar 08 21:57:38.706230 master-0 kubenswrapper[7480]: I0308 21:57:38.706180 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtsq7\" (UniqueName: \"kubernetes.io/projected/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-kube-api-access-vtsq7\") pod \"controller-manager-754785bd46-4kk54\" (UID: \"c0c7c5e3-7bbb-4a43-8202-5603a869aee6\") " pod="openshift-controller-manager/controller-manager-754785bd46-4kk54"
Mar 08 21:57:38.706439 master-0 kubenswrapper[7480]: I0308 21:57:38.706406 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-config\") pod \"controller-manager-754785bd46-4kk54\" (UID: \"c0c7c5e3-7bbb-4a43-8202-5603a869aee6\") " pod="openshift-controller-manager/controller-manager-754785bd46-4kk54"
Mar 08 21:57:38.707848 master-0 kubenswrapper[7480]: I0308 21:57:38.707369 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-proxy-ca-bundles\") pod \"controller-manager-754785bd46-4kk54\" (UID: \"c0c7c5e3-7bbb-4a43-8202-5603a869aee6\") " pod="openshift-controller-manager/controller-manager-754785bd46-4kk54"
Mar 08 21:57:38.709618 master-0 kubenswrapper[7480]: I0308 21:57:38.709570 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-serving-cert\") pod \"controller-manager-754785bd46-4kk54\" (UID: \"c0c7c5e3-7bbb-4a43-8202-5603a869aee6\") " pod="openshift-controller-manager/controller-manager-754785bd46-4kk54"
Mar 08 21:57:38.725585 master-0 kubenswrapper[7480]: I0308 21:57:38.725523 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtsq7\" (UniqueName: \"kubernetes.io/projected/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-kube-api-access-vtsq7\") pod \"controller-manager-754785bd46-4kk54\" (UID: \"c0c7c5e3-7bbb-4a43-8202-5603a869aee6\") " pod="openshift-controller-manager/controller-manager-754785bd46-4kk54"
Mar 08 21:57:38.808911 master-0 kubenswrapper[7480]: I0308 21:57:38.808760 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/838f81b9-0423-437e-88ed-88eebfe4c188-client-ca\") pod \"route-controller-manager-685b849569-wt9mn\" (UID: \"838f81b9-0423-437e-88ed-88eebfe4c188\") " pod="openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn"
Mar 08 21:57:38.808911 master-0 kubenswrapper[7480]: I0308 21:57:38.808833 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/838f81b9-0423-437e-88ed-88eebfe4c188-serving-cert\") pod \"route-controller-manager-685b849569-wt9mn\" (UID: \"838f81b9-0423-437e-88ed-88eebfe4c188\") " pod="openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn"
Mar 08 21:57:38.809187 master-0 kubenswrapper[7480]: E0308 21:57:38.808980 7480 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 08 21:57:38.809187 master-0 kubenswrapper[7480]:
E0308 21:57:38.809125 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/838f81b9-0423-437e-88ed-88eebfe4c188-client-ca podName:838f81b9-0423-437e-88ed-88eebfe4c188 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:40.80909772 +0000 UTC m=+11.262718342 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/838f81b9-0423-437e-88ed-88eebfe4c188-client-ca") pod "route-controller-manager-685b849569-wt9mn" (UID: "838f81b9-0423-437e-88ed-88eebfe4c188") : configmap "client-ca" not found
Mar 08 21:57:38.809272 master-0 kubenswrapper[7480]: E0308 21:57:38.809193 7480 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 08 21:57:38.809309 master-0 kubenswrapper[7480]: E0308 21:57:38.809276 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/838f81b9-0423-437e-88ed-88eebfe4c188-serving-cert podName:838f81b9-0423-437e-88ed-88eebfe4c188 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:40.809250724 +0000 UTC m=+11.262871516 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/838f81b9-0423-437e-88ed-88eebfe4c188-serving-cert") pod "route-controller-manager-685b849569-wt9mn" (UID: "838f81b9-0423-437e-88ed-88eebfe4c188") : secret "serving-cert" not found
Mar 08 21:57:38.943108 master-0 kubenswrapper[7480]: I0308 21:57:38.943017 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-754785bd46-4kk54"]
Mar 08 21:57:38.943372 master-0 kubenswrapper[7480]: E0308 21:57:38.943296 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-754785bd46-4kk54" podUID="c0c7c5e3-7bbb-4a43-8202-5603a869aee6"
Mar 08 21:57:39.001337 master-0 kubenswrapper[7480]: I0308 21:57:39.001240 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-754785bd46-4kk54"
Mar 08 21:57:39.010881 master-0 kubenswrapper[7480]: I0308 21:57:39.008762 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-754785bd46-4kk54"
Mar 08 21:57:39.030495 master-0 kubenswrapper[7480]: I0308 21:57:39.030453 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 21:57:39.036878 master-0 kubenswrapper[7480]: I0308 21:57:39.036829 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 21:57:39.043384 master-0 kubenswrapper[7480]: W0308 21:57:39.043218 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd287e2ca_f134_4e34_96f7_50a3055ee119.slice/crio-2d305e2126da2df672b5029a4e5d93937d2fb815ad69e0ad77e8d2f95bf5f7ba WatchSource:0}: Error finding container 2d305e2126da2df672b5029a4e5d93937d2fb815ad69e0ad77e8d2f95bf5f7ba: Status 404 returned error can't find the container with id 2d305e2126da2df672b5029a4e5d93937d2fb815ad69e0ad77e8d2f95bf5f7ba
Mar 08 21:57:39.112764 master-0 kubenswrapper[7480]: I0308 21:57:39.112707 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-serving-cert\") pod \"c0c7c5e3-7bbb-4a43-8202-5603a869aee6\" (UID: \"c0c7c5e3-7bbb-4a43-8202-5603a869aee6\") "
Mar 08 21:57:39.112764 master-0 kubenswrapper[7480]: I0308 21:57:39.112768 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-proxy-ca-bundles\") pod \"c0c7c5e3-7bbb-4a43-8202-5603a869aee6\" (UID: \"c0c7c5e3-7bbb-4a43-8202-5603a869aee6\") "
Mar 08 21:57:39.113023 master-0 kubenswrapper[7480]: I0308 21:57:39.112822 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-config\") pod \"c0c7c5e3-7bbb-4a43-8202-5603a869aee6\" (UID: \"c0c7c5e3-7bbb-4a43-8202-5603a869aee6\") "
Mar 08 21:57:39.113023 master-0 kubenswrapper[7480]: I0308 21:57:39.112877 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtsq7\" (UniqueName: \"kubernetes.io/projected/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-kube-api-access-vtsq7\") pod \"c0c7c5e3-7bbb-4a43-8202-5603a869aee6\" (UID: \"c0c7c5e3-7bbb-4a43-8202-5603a869aee6\") "
Mar 08 21:57:39.116305 master-0 kubenswrapper[7480]: I0308 21:57:39.115470 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c0c7c5e3-7bbb-4a43-8202-5603a869aee6" (UID: "c0c7c5e3-7bbb-4a43-8202-5603a869aee6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 21:57:39.124130 master-0 kubenswrapper[7480]: I0308 21:57:39.118793 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-kube-api-access-vtsq7" (OuterVolumeSpecName: "kube-api-access-vtsq7") pod "c0c7c5e3-7bbb-4a43-8202-5603a869aee6" (UID: "c0c7c5e3-7bbb-4a43-8202-5603a869aee6"). InnerVolumeSpecName "kube-api-access-vtsq7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 21:57:39.124130 master-0 kubenswrapper[7480]: I0308 21:57:39.119264 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-config" (OuterVolumeSpecName: "config") pod "c0c7c5e3-7bbb-4a43-8202-5603a869aee6" (UID: "c0c7c5e3-7bbb-4a43-8202-5603a869aee6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 21:57:39.128466 master-0 kubenswrapper[7480]: I0308 21:57:39.126674 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c0c7c5e3-7bbb-4a43-8202-5603a869aee6" (UID: "c0c7c5e3-7bbb-4a43-8202-5603a869aee6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 21:57:39.215320 master-0 kubenswrapper[7480]: I0308 21:57:39.214463 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-client-ca\") pod \"controller-manager-754785bd46-4kk54\" (UID: \"c0c7c5e3-7bbb-4a43-8202-5603a869aee6\") " pod="openshift-controller-manager/controller-manager-754785bd46-4kk54"
Mar 08 21:57:39.215505 master-0 kubenswrapper[7480]: I0308 21:57:39.215368 7480 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-config\") on node \"master-0\" DevicePath \"\""
Mar 08 21:57:39.215505 master-0 kubenswrapper[7480]: E0308 21:57:39.214930 7480 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 08 21:57:39.215505 master-0 kubenswrapper[7480]: I0308 21:57:39.215420 7480 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtsq7\" (UniqueName: \"kubernetes.io/projected/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-kube-api-access-vtsq7\") on node \"master-0\" DevicePath \"\""
Mar 08 21:57:39.215753 master-0 kubenswrapper[7480]: E0308 21:57:39.215521 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-client-ca podName:c0c7c5e3-7bbb-4a43-8202-5603a869aee6 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:40.215487043 +0000 UTC m=+10.669107645 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-client-ca") pod "controller-manager-754785bd46-4kk54" (UID: "c0c7c5e3-7bbb-4a43-8202-5603a869aee6") : configmap "client-ca" not found
Mar 08 21:57:39.215753 master-0 kubenswrapper[7480]: I0308 21:57:39.215573 7480 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 08 21:57:39.215753 master-0 kubenswrapper[7480]: I0308 21:57:39.215592 7480 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 08 21:57:39.236108 master-0 kubenswrapper[7480]: I0308 21:57:39.236037 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf"]
Mar 08 21:57:39.304532 master-0 kubenswrapper[7480]: W0308 21:57:39.304455 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2851c096_f5cb_4a46_a5a0_ac0b1341033b.slice/crio-d2fca6e62ae89a98bc2678ca1c4514d3b2efd7621615252b3640dae5aca8db7e WatchSource:0}: Error finding container d2fca6e62ae89a98bc2678ca1c4514d3b2efd7621615252b3640dae5aca8db7e: Status 404 returned error can't find the container with id d2fca6e62ae89a98bc2678ca1c4514d3b2efd7621615252b3640dae5aca8db7e
Mar 08 21:57:40.006482 master-0 kubenswrapper[7480]: I0308 21:57:40.006337 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" event={"ID":"d287e2ca-f134-4e34-96f7-50a3055ee119","Type":"ContainerStarted","Data":"2d305e2126da2df672b5029a4e5d93937d2fb815ad69e0ad77e8d2f95bf5f7ba"}
Mar 08 21:57:40.007884 master-0 kubenswrapper[7480]: I0308 21:57:40.007856 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" event={"ID":"2851c096-f5cb-4a46-a5a0-ac0b1341033b","Type":"ContainerStarted","Data":"d2fca6e62ae89a98bc2678ca1c4514d3b2efd7621615252b3640dae5aca8db7e"}
Mar 08 21:57:40.010005 master-0 kubenswrapper[7480]: I0308 21:57:40.009939 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" event={"ID":"c901b468-b8e9-48f8-8050-0d54e24e2adb","Type":"ContainerStarted","Data":"975d86808356450f32e152ee3c49e6ab2d8f04281755488f22f0b7506389bb2d"}
Mar 08 21:57:40.013634 master-0 kubenswrapper[7480]: I0308 21:57:40.013600 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-bf6ws" event={"ID":"0d0feb73-2ef6-4083-81ce-82a1394ce9c4","Type":"ContainerStarted","Data":"7f5513a7ffe922d5291ba08489744871d2c54bef0e5d4ccf76a9ea9b9fb96ca1"}
Mar 08 21:57:40.013634 master-0 kubenswrapper[7480]: I0308 21:57:40.013629 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-bf6ws" event={"ID":"0d0feb73-2ef6-4083-81ce-82a1394ce9c4","Type":"ContainerStarted","Data":"6bd6078c00ce19f9ca7d9c5af9e05dbf9ff45aa8af12f0b8ff8b3ca02782674f"}
Mar 08 21:57:40.018330 master-0 kubenswrapper[7480]: I0308 21:57:40.018291 7480 generic.go:334] "Generic (PLEG): container finished"
podID="de89c423-0f2a-440f-9fa9-92fefea84b09" containerID="524292da38fe899d291d24e77e4f5efb26dbdfacb31c02270a11c8d9d08d5284" exitCode=0 Mar 08 21:57:40.018597 master-0 kubenswrapper[7480]: I0308 21:57:40.018543 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" event={"ID":"de89c423-0f2a-440f-9fa9-92fefea84b09","Type":"ContainerDied","Data":"524292da38fe899d291d24e77e4f5efb26dbdfacb31c02270a11c8d9d08d5284"} Mar 08 21:57:40.018690 master-0 kubenswrapper[7480]: I0308 21:57:40.018657 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-754785bd46-4kk54" Mar 08 21:57:40.023871 master-0 kubenswrapper[7480]: I0308 21:57:40.023829 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 21:57:40.043478 master-0 kubenswrapper[7480]: I0308 21:57:40.043372 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" podStartSLOduration=2.335616974 podStartE2EDuration="7.043337959s" podCreationTimestamp="2026-03-08 21:57:33 +0000 UTC" firstStartedPulling="2026-03-08 21:57:34.325655644 +0000 UTC m=+4.779276246" lastFinishedPulling="2026-03-08 21:57:39.033376629 +0000 UTC m=+9.486997231" observedRunningTime="2026-03-08 21:57:40.024080708 +0000 UTC m=+10.477701320" watchObservedRunningTime="2026-03-08 21:57:40.043337959 +0000 UTC m=+10.496958581" Mar 08 21:57:40.043680 master-0 kubenswrapper[7480]: I0308 21:57:40.043636 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-bf6ws" podStartSLOduration=3.256212648 podStartE2EDuration="8.043628066s" podCreationTimestamp="2026-03-08 21:57:32 +0000 UTC" firstStartedPulling="2026-03-08 21:57:34.244628868 +0000 UTC m=+4.698249470" lastFinishedPulling="2026-03-08 21:57:39.032044286 +0000 UTC m=+9.485664888" observedRunningTime="2026-03-08 21:57:40.042270021 +0000 UTC m=+10.495890633" watchObservedRunningTime="2026-03-08 21:57:40.043628066 +0000 UTC m=+10.497248688" Mar 08 21:57:40.127970 master-0 kubenswrapper[7480]: I0308 21:57:40.127916 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c4db5b54-m7wjt"] Mar 08 21:57:40.128586 master-0 kubenswrapper[7480]: I0308 21:57:40.128560 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-754785bd46-4kk54"] Mar 08 21:57:40.128677 master-0 kubenswrapper[7480]: I0308 21:57:40.128659 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c4db5b54-m7wjt" Mar 08 21:57:40.132256 master-0 kubenswrapper[7480]: I0308 21:57:40.131931 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 08 21:57:40.132256 master-0 kubenswrapper[7480]: I0308 21:57:40.131946 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 08 21:57:40.132508 master-0 kubenswrapper[7480]: I0308 21:57:40.132295 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 08 21:57:40.133447 master-0 kubenswrapper[7480]: I0308 21:57:40.132549 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 08 21:57:40.133447 master-0 kubenswrapper[7480]: I0308 21:57:40.132747 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 08 21:57:40.135583 master-0 kubenswrapper[7480]: I0308 21:57:40.134710 7480 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-754785bd46-4kk54"] Mar 08 21:57:40.135713 master-0 kubenswrapper[7480]: I0308 21:57:40.135573 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c4db5b54-m7wjt"] Mar 08 21:57:40.141878 master-0 kubenswrapper[7480]: I0308 21:57:40.141837 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 08 21:57:40.229469 master-0 kubenswrapper[7480]: I0308 21:57:40.229402 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ddab3a6-1c13-4476-abc5-1c65301ae173-client-ca\") pod \"controller-manager-c4db5b54-m7wjt\" (UID: \"1ddab3a6-1c13-4476-abc5-1c65301ae173\") " pod="openshift-controller-manager/controller-manager-c4db5b54-m7wjt" Mar 08 21:57:40.229683 master-0 kubenswrapper[7480]: I0308 21:57:40.229488 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ddab3a6-1c13-4476-abc5-1c65301ae173-config\") pod \"controller-manager-c4db5b54-m7wjt\" (UID: \"1ddab3a6-1c13-4476-abc5-1c65301ae173\") " pod="openshift-controller-manager/controller-manager-c4db5b54-m7wjt" Mar 08 21:57:40.229751 master-0 kubenswrapper[7480]: I0308 21:57:40.229699 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ddab3a6-1c13-4476-abc5-1c65301ae173-serving-cert\") pod \"controller-manager-c4db5b54-m7wjt\" (UID: \"1ddab3a6-1c13-4476-abc5-1c65301ae173\") " pod="openshift-controller-manager/controller-manager-c4db5b54-m7wjt" Mar 08 21:57:40.229939 master-0 kubenswrapper[7480]: I0308 21:57:40.229912 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ddab3a6-1c13-4476-abc5-1c65301ae173-proxy-ca-bundles\") pod \"controller-manager-c4db5b54-m7wjt\" (UID: \"1ddab3a6-1c13-4476-abc5-1c65301ae173\") " pod="openshift-controller-manager/controller-manager-c4db5b54-m7wjt" Mar 08 21:57:40.230020 master-0 kubenswrapper[7480]: I0308 21:57:40.230001 7480 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wggcz\" (UniqueName: \"kubernetes.io/projected/1ddab3a6-1c13-4476-abc5-1c65301ae173-kube-api-access-wggcz\") pod \"controller-manager-c4db5b54-m7wjt\" (UID: \"1ddab3a6-1c13-4476-abc5-1c65301ae173\") " pod="openshift-controller-manager/controller-manager-c4db5b54-m7wjt" Mar 08 21:57:40.230159 master-0 kubenswrapper[7480]: I0308 21:57:40.230131 7480 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0c7c5e3-7bbb-4a43-8202-5603a869aee6-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:40.331097 master-0 kubenswrapper[7480]: I0308 21:57:40.331033 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ddab3a6-1c13-4476-abc5-1c65301ae173-proxy-ca-bundles\") pod \"controller-manager-c4db5b54-m7wjt\" (UID: \"1ddab3a6-1c13-4476-abc5-1c65301ae173\") " pod="openshift-controller-manager/controller-manager-c4db5b54-m7wjt" Mar 08 21:57:40.331272 master-0 kubenswrapper[7480]: I0308 21:57:40.331170 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wggcz\" (UniqueName: \"kubernetes.io/projected/1ddab3a6-1c13-4476-abc5-1c65301ae173-kube-api-access-wggcz\") pod \"controller-manager-c4db5b54-m7wjt\" (UID: \"1ddab3a6-1c13-4476-abc5-1c65301ae173\") " pod="openshift-controller-manager/controller-manager-c4db5b54-m7wjt" Mar 08 21:57:40.331660 master-0 kubenswrapper[7480]: I0308 21:57:40.331593 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ddab3a6-1c13-4476-abc5-1c65301ae173-client-ca\") pod \"controller-manager-c4db5b54-m7wjt\" (UID: \"1ddab3a6-1c13-4476-abc5-1c65301ae173\") " pod="openshift-controller-manager/controller-manager-c4db5b54-m7wjt" Mar 08 21:57:40.331853 master-0 kubenswrapper[7480]: E0308 21:57:40.331810 7480 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 08 21:57:40.331912 master-0 kubenswrapper[7480]: I0308 21:57:40.331864 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ddab3a6-1c13-4476-abc5-1c65301ae173-config\") pod \"controller-manager-c4db5b54-m7wjt\" (UID: \"1ddab3a6-1c13-4476-abc5-1c65301ae173\") " pod="openshift-controller-manager/controller-manager-c4db5b54-m7wjt" Mar 08 21:57:40.331948 master-0 kubenswrapper[7480]: E0308 21:57:40.331936 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1ddab3a6-1c13-4476-abc5-1c65301ae173-client-ca podName:1ddab3a6-1c13-4476-abc5-1c65301ae173 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:40.831896129 +0000 UTC m=+11.285516961 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1ddab3a6-1c13-4476-abc5-1c65301ae173-client-ca") pod "controller-manager-c4db5b54-m7wjt" (UID: "1ddab3a6-1c13-4476-abc5-1c65301ae173") : configmap "client-ca" not found Mar 08 21:57:40.331991 master-0 kubenswrapper[7480]: I0308 21:57:40.331973 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ddab3a6-1c13-4476-abc5-1c65301ae173-serving-cert\") pod \"controller-manager-c4db5b54-m7wjt\" (UID: \"1ddab3a6-1c13-4476-abc5-1c65301ae173\") " pod="openshift-controller-manager/controller-manager-c4db5b54-m7wjt" Mar 08 21:57:40.333119 master-0 kubenswrapper[7480]: I0308 21:57:40.333064 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ddab3a6-1c13-4476-abc5-1c65301ae173-config\") pod \"controller-manager-c4db5b54-m7wjt\" (UID: \"1ddab3a6-1c13-4476-abc5-1c65301ae173\") " pod="openshift-controller-manager/controller-manager-c4db5b54-m7wjt" Mar 08 21:57:40.335086 master-0 kubenswrapper[7480]: I0308 21:57:40.335051 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ddab3a6-1c13-4476-abc5-1c65301ae173-proxy-ca-bundles\") pod \"controller-manager-c4db5b54-m7wjt\" (UID: \"1ddab3a6-1c13-4476-abc5-1c65301ae173\") " pod="openshift-controller-manager/controller-manager-c4db5b54-m7wjt" Mar 08 21:57:40.343369 master-0 kubenswrapper[7480]: I0308 21:57:40.343289 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ddab3a6-1c13-4476-abc5-1c65301ae173-serving-cert\") pod \"controller-manager-c4db5b54-m7wjt\" (UID: \"1ddab3a6-1c13-4476-abc5-1c65301ae173\") " pod="openshift-controller-manager/controller-manager-c4db5b54-m7wjt" Mar 08 21:57:40.349524 master-0 kubenswrapper[7480]: I0308 21:57:40.349469 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wggcz\" (UniqueName: \"kubernetes.io/projected/1ddab3a6-1c13-4476-abc5-1c65301ae173-kube-api-access-wggcz\") pod \"controller-manager-c4db5b54-m7wjt\" (UID: \"1ddab3a6-1c13-4476-abc5-1c65301ae173\") " pod="openshift-controller-manager/controller-manager-c4db5b54-m7wjt" Mar 08 21:57:40.842062 master-0 kubenswrapper[7480]: I0308 21:57:40.841975 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ddab3a6-1c13-4476-abc5-1c65301ae173-client-ca\") pod \"controller-manager-c4db5b54-m7wjt\" (UID: \"1ddab3a6-1c13-4476-abc5-1c65301ae173\") " pod="openshift-controller-manager/controller-manager-c4db5b54-m7wjt" Mar 08 21:57:40.842062 master-0 kubenswrapper[7480]: I0308 21:57:40.842060 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/838f81b9-0423-437e-88ed-88eebfe4c188-client-ca\") pod \"route-controller-manager-685b849569-wt9mn\" (UID: \"838f81b9-0423-437e-88ed-88eebfe4c188\") " pod="openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn" Mar 08 21:57:40.842435 master-0 kubenswrapper[7480]: I0308 21:57:40.842120 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/838f81b9-0423-437e-88ed-88eebfe4c188-serving-cert\") pod \"route-controller-manager-685b849569-wt9mn\" (UID: 
\"838f81b9-0423-437e-88ed-88eebfe4c188\") " pod="openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn" Mar 08 21:57:40.842435 master-0 kubenswrapper[7480]: E0308 21:57:40.842352 7480 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 08 21:57:40.842503 master-0 kubenswrapper[7480]: E0308 21:57:40.842452 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/838f81b9-0423-437e-88ed-88eebfe4c188-serving-cert podName:838f81b9-0423-437e-88ed-88eebfe4c188 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:44.842431558 +0000 UTC m=+15.296052160 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/838f81b9-0423-437e-88ed-88eebfe4c188-serving-cert") pod "route-controller-manager-685b849569-wt9mn" (UID: "838f81b9-0423-437e-88ed-88eebfe4c188") : secret "serving-cert" not found Mar 08 21:57:40.842977 master-0 kubenswrapper[7480]: E0308 21:57:40.842717 7480 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 08 21:57:40.842977 master-0 kubenswrapper[7480]: E0308 21:57:40.842815 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/838f81b9-0423-437e-88ed-88eebfe4c188-client-ca podName:838f81b9-0423-437e-88ed-88eebfe4c188 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:44.842794067 +0000 UTC m=+15.296414669 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/838f81b9-0423-437e-88ed-88eebfe4c188-client-ca") pod "route-controller-manager-685b849569-wt9mn" (UID: "838f81b9-0423-437e-88ed-88eebfe4c188") : configmap "client-ca" not found Mar 08 21:57:40.842977 master-0 kubenswrapper[7480]: E0308 21:57:40.842906 7480 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 08 21:57:40.842977 master-0 kubenswrapper[7480]: E0308 21:57:40.842952 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1ddab3a6-1c13-4476-abc5-1c65301ae173-client-ca podName:1ddab3a6-1c13-4476-abc5-1c65301ae173 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:41.842941981 +0000 UTC m=+12.296562793 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1ddab3a6-1c13-4476-abc5-1c65301ae173-client-ca") pod "controller-manager-c4db5b54-m7wjt" (UID: "1ddab3a6-1c13-4476-abc5-1c65301ae173") : configmap "client-ca" not found Mar 08 21:57:41.145231 master-0 kubenswrapper[7480]: I0308 21:57:41.144804 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:41.145231 master-0 kubenswrapper[7480]: I0308 21:57:41.145041 7480 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 21:57:41.145231 master-0 kubenswrapper[7480]: I0308 21:57:41.145056 7480 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 21:57:41.199956 master-0 kubenswrapper[7480]: I0308 21:57:41.199889 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 21:57:41.795522 master-0 kubenswrapper[7480]: I0308 21:57:41.795000 7480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0c7c5e3-7bbb-4a43-8202-5603a869aee6" path="/var/lib/kubelet/pods/c0c7c5e3-7bbb-4a43-8202-5603a869aee6/volumes" Mar 08 21:57:41.858970 master-0 kubenswrapper[7480]: I0308 21:57:41.858889 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ddab3a6-1c13-4476-abc5-1c65301ae173-client-ca\") pod \"controller-manager-c4db5b54-m7wjt\" (UID: \"1ddab3a6-1c13-4476-abc5-1c65301ae173\") " pod="openshift-controller-manager/controller-manager-c4db5b54-m7wjt" Mar 08 21:57:41.860238 master-0 kubenswrapper[7480]: E0308 21:57:41.859176 7480 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 08 21:57:41.860238 master-0 kubenswrapper[7480]: E0308 21:57:41.859244 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1ddab3a6-1c13-4476-abc5-1c65301ae173-client-ca podName:1ddab3a6-1c13-4476-abc5-1c65301ae173 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:43.859223565 +0000 UTC m=+14.312844177 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1ddab3a6-1c13-4476-abc5-1c65301ae173-client-ca") pod "controller-manager-c4db5b54-m7wjt" (UID: "1ddab3a6-1c13-4476-abc5-1c65301ae173") : configmap "client-ca" not found Mar 08 21:57:42.027027 master-0 kubenswrapper[7480]: I0308 21:57:42.026966 7480 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 21:57:43.035114 master-0 kubenswrapper[7480]: I0308 21:57:43.034341 7480 generic.go:334] "Generic (PLEG): container finished" podID="d0641333-feda-44c5-baf5-ceee4ce3fd8f" containerID="5606fcf795565b19c0d649668bacd0041a38e917c804757278c207fde8081155" exitCode=0 Mar 08 21:57:43.035114 master-0 kubenswrapper[7480]: I0308 21:57:43.034390 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" event={"ID":"d0641333-feda-44c5-baf5-ceee4ce3fd8f","Type":"ContainerDied","Data":"5606fcf795565b19c0d649668bacd0041a38e917c804757278c207fde8081155"} Mar 08 21:57:43.035114 master-0 kubenswrapper[7480]: I0308 21:57:43.034795 7480 scope.go:117] "RemoveContainer" containerID="5606fcf795565b19c0d649668bacd0041a38e917c804757278c207fde8081155" Mar 08 21:57:43.900017 master-0 kubenswrapper[7480]: I0308 21:57:43.899935 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ddab3a6-1c13-4476-abc5-1c65301ae173-client-ca\") pod \"controller-manager-c4db5b54-m7wjt\" (UID: \"1ddab3a6-1c13-4476-abc5-1c65301ae173\") " pod="openshift-controller-manager/controller-manager-c4db5b54-m7wjt" Mar 08 21:57:43.900334 master-0 kubenswrapper[7480]: E0308 21:57:43.900181 7480 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 08 21:57:43.900334 master-0 kubenswrapper[7480]: E0308 21:57:43.900292 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1ddab3a6-1c13-4476-abc5-1c65301ae173-client-ca podName:1ddab3a6-1c13-4476-abc5-1c65301ae173 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:47.900267962 +0000 UTC m=+18.353888564 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1ddab3a6-1c13-4476-abc5-1c65301ae173-client-ca") pod "controller-manager-c4db5b54-m7wjt" (UID: "1ddab3a6-1c13-4476-abc5-1c65301ae173") : configmap "client-ca" not found Mar 08 21:57:44.008927 master-0 kubenswrapper[7480]: I0308 21:57:44.008862 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" Mar 08 21:57:44.058399 master-0 kubenswrapper[7480]: I0308 21:57:44.057877 7480 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" Mar 08 21:57:44.583539 master-0 kubenswrapper[7480]: I0308 21:57:44.583467 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-648655b4b4-k22kr"] Mar 08 21:57:44.584892 master-0 kubenswrapper[7480]: I0308 21:57:44.584830 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:44.594158 master-0 kubenswrapper[7480]: I0308 21:57:44.592882 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 08 21:57:44.594158 master-0 kubenswrapper[7480]: I0308 21:57:44.592959 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 08 21:57:44.594158 master-0 kubenswrapper[7480]: I0308 21:57:44.593214 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 08 21:57:44.594158 master-0 kubenswrapper[7480]: I0308 21:57:44.593392 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0" Mar 08 21:57:44.594158 master-0 kubenswrapper[7480]: I0308 21:57:44.593524 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 08 21:57:44.594158 master-0 kubenswrapper[7480]: I0308 21:57:44.593684 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 08 21:57:44.594158 master-0 kubenswrapper[7480]: I0308 21:57:44.593842 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0" Mar 08 21:57:44.594158 master-0 kubenswrapper[7480]: I0308 21:57:44.593996 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 08 21:57:44.598034 master-0 kubenswrapper[7480]: I0308 21:57:44.597870 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 08 21:57:44.605033 master-0 kubenswrapper[7480]: I0308 21:57:44.605001 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 08 21:57:44.609084 master-0 kubenswrapper[7480]: I0308 21:57:44.608896 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-config\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:44.609084 master-0 kubenswrapper[7480]: I0308 21:57:44.609038 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-audit\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:44.609278 master-0 kubenswrapper[7480]: I0308 21:57:44.609211 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-encryption-config\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:44.609278 master-0 kubenswrapper[7480]: I0308 21:57:44.609260 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-etcd-client\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " 
pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:44.609366 master-0 kubenswrapper[7480]: I0308 21:57:44.609288 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-etcd-serving-ca\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:44.609366 master-0 kubenswrapper[7480]: I0308 21:57:44.609322 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-trusted-ca-bundle\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:44.609423 master-0 kubenswrapper[7480]: I0308 21:57:44.609389 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6200cf99-d7d2-473f-856b-447430bc9b08-node-pullsecrets\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:44.609452 master-0 kubenswrapper[7480]: I0308 21:57:44.609429 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-image-import-ca\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:44.609492 master-0 kubenswrapper[7480]: I0308 21:57:44.609469 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89t47\" (UniqueName: \"kubernetes.io/projected/6200cf99-d7d2-473f-856b-447430bc9b08-kube-api-access-89t47\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:44.609525 master-0 kubenswrapper[7480]: I0308 21:57:44.609496 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6200cf99-d7d2-473f-856b-447430bc9b08-audit-dir\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:44.609598 master-0 kubenswrapper[7480]: I0308 21:57:44.609534 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-serving-cert\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:44.611318 master-0 kubenswrapper[7480]: I0308 21:57:44.611271 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-648655b4b4-k22kr"] Mar 08 21:57:44.711449 master-0 kubenswrapper[7480]: I0308 21:57:44.711387 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-etcd-serving-ca\") pod \"apiserver-648655b4b4-k22kr\" (UID: 
\"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:44.711720 master-0 kubenswrapper[7480]: I0308 21:57:44.711463 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-trusted-ca-bundle\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:44.711720 master-0 kubenswrapper[7480]: I0308 21:57:44.711532 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6200cf99-d7d2-473f-856b-447430bc9b08-node-pullsecrets\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:44.711720 master-0 kubenswrapper[7480]: I0308 21:57:44.711556 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-image-import-ca\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:44.711720 master-0 kubenswrapper[7480]: I0308 21:57:44.711588 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89t47\" (UniqueName: \"kubernetes.io/projected/6200cf99-d7d2-473f-856b-447430bc9b08-kube-api-access-89t47\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:44.711720 master-0 kubenswrapper[7480]: I0308 21:57:44.711605 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6200cf99-d7d2-473f-856b-447430bc9b08-audit-dir\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:44.711720 master-0 kubenswrapper[7480]: I0308 21:57:44.711631 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-serving-cert\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:44.711720 master-0 kubenswrapper[7480]: I0308 21:57:44.711646 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-config\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:44.711720 master-0 kubenswrapper[7480]: I0308 21:57:44.711681 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-audit\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:44.711720 master-0 kubenswrapper[7480]: I0308 21:57:44.711721 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-encryption-config\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:44.711978 master-0 kubenswrapper[7480]: I0308 21:57:44.711747 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-etcd-client\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:44.711978 master-0 kubenswrapper[7480]: E0308 21:57:44.711911 7480 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: secret "etcd-client" not found Mar 08 21:57:44.711978 master-0 kubenswrapper[7480]: E0308 21:57:44.711977 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-etcd-client podName:6200cf99-d7d2-473f-856b-447430bc9b08 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:45.211957238 +0000 UTC m=+15.665577840 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-etcd-client") pod "apiserver-648655b4b4-k22kr" (UID: "6200cf99-d7d2-473f-856b-447430bc9b08") : secret "etcd-client" not found Mar 08 21:57:44.713125 master-0 kubenswrapper[7480]: E0308 21:57:44.712167 7480 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 08 21:57:44.713125 master-0 kubenswrapper[7480]: E0308 21:57:44.712192 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-serving-cert podName:6200cf99-d7d2-473f-856b-447430bc9b08 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:45.212185694 +0000 UTC m=+15.665806296 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-serving-cert") pod "apiserver-648655b4b4-k22kr" (UID: "6200cf99-d7d2-473f-856b-447430bc9b08") : secret "serving-cert" not found Mar 08 21:57:44.713235 master-0 kubenswrapper[7480]: I0308 21:57:44.713159 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-etcd-serving-ca\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:44.713449 master-0 kubenswrapper[7480]: I0308 21:57:44.713359 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6200cf99-d7d2-473f-856b-447430bc9b08-audit-dir\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:44.713449 master-0 kubenswrapper[7480]: I0308 21:57:44.713357 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-image-import-ca\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:44.713449 master-0 kubenswrapper[7480]: E0308 21:57:44.713435 7480 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 08 21:57:44.713575 master-0 kubenswrapper[7480]: I0308 21:57:44.713492 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-trusted-ca-bundle\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:44.713575 master-0 kubenswrapper[7480]: E0308 21:57:44.713521 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-audit podName:6200cf99-d7d2-473f-856b-447430bc9b08 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:45.213495128 +0000 UTC m=+15.667115940 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-audit") pod "apiserver-648655b4b4-k22kr" (UID: "6200cf99-d7d2-473f-856b-447430bc9b08") : configmap "audit-0" not found
Mar 08 21:57:44.713703 master-0 kubenswrapper[7480]: I0308 21:57:44.713671 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6200cf99-d7d2-473f-856b-447430bc9b08-node-pullsecrets\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr"
Mar 08 21:57:44.713790 master-0 kubenswrapper[7480]: I0308 21:57:44.713755 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-config\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr"
Mar 08 21:57:44.727107 master-0 kubenswrapper[7480]: I0308 21:57:44.723470 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-encryption-config\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr"
Mar 08 21:57:44.741891 master-0 kubenswrapper[7480]: I0308 21:57:44.741848 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89t47\" (UniqueName: \"kubernetes.io/projected/6200cf99-d7d2-473f-856b-447430bc9b08-kube-api-access-89t47\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr"
Mar 08 21:57:44.914220 master-0 kubenswrapper[7480]: I0308 21:57:44.913298 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/838f81b9-0423-437e-88ed-88eebfe4c188-client-ca\") pod \"route-controller-manager-685b849569-wt9mn\" (UID: \"838f81b9-0423-437e-88ed-88eebfe4c188\") " pod="openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn"
Mar 08 21:57:44.914220 master-0 kubenswrapper[7480]: I0308 21:57:44.913362 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/838f81b9-0423-437e-88ed-88eebfe4c188-serving-cert\") pod \"route-controller-manager-685b849569-wt9mn\" (UID: \"838f81b9-0423-437e-88ed-88eebfe4c188\") " pod="openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn"
Mar 08 21:57:44.914220 master-0 kubenswrapper[7480]: E0308 21:57:44.913559 7480 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 08 21:57:44.914220 master-0 kubenswrapper[7480]: E0308 21:57:44.913616 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/838f81b9-0423-437e-88ed-88eebfe4c188-serving-cert podName:838f81b9-0423-437e-88ed-88eebfe4c188 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:52.913597359 +0000 UTC m=+23.367217961 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/838f81b9-0423-437e-88ed-88eebfe4c188-serving-cert") pod "route-controller-manager-685b849569-wt9mn" (UID: "838f81b9-0423-437e-88ed-88eebfe4c188") : secret "serving-cert" not found
Mar 08 21:57:44.914220 master-0 kubenswrapper[7480]: E0308 21:57:44.914028 7480 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 08 21:57:44.914220 master-0 kubenswrapper[7480]: E0308 21:57:44.914052 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/838f81b9-0423-437e-88ed-88eebfe4c188-client-ca podName:838f81b9-0423-437e-88ed-88eebfe4c188 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:52.914045621 +0000 UTC m=+23.367666213 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/838f81b9-0423-437e-88ed-88eebfe4c188-client-ca") pod "route-controller-manager-685b849569-wt9mn" (UID: "838f81b9-0423-437e-88ed-88eebfe4c188") : configmap "client-ca" not found
Mar 08 21:57:45.217874 master-0 kubenswrapper[7480]: I0308 21:57:45.217703 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-serving-cert\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr"
Mar 08 21:57:45.217874 master-0 kubenswrapper[7480]: I0308 21:57:45.217771 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-audit\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr"
Mar 08 21:57:45.218609 master-0 kubenswrapper[7480]: E0308 21:57:45.217962 7480 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Mar 08 21:57:45.218609 master-0 kubenswrapper[7480]: E0308 21:57:45.218135 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-serving-cert podName:6200cf99-d7d2-473f-856b-447430bc9b08 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:46.218064552 +0000 UTC m=+16.671685174 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-serving-cert") pod "apiserver-648655b4b4-k22kr" (UID: "6200cf99-d7d2-473f-856b-447430bc9b08") : secret "serving-cert" not found
Mar 08 21:57:45.218609 master-0 kubenswrapper[7480]: I0308 21:57:45.218231 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-etcd-client\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr"
Mar 08 21:57:45.218609 master-0 kubenswrapper[7480]: E0308 21:57:45.218252 7480 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 08 21:57:45.218609 master-0 kubenswrapper[7480]: E0308 21:57:45.218376 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-audit podName:6200cf99-d7d2-473f-856b-447430bc9b08 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:46.21834653 +0000 UTC m=+16.671967182 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-audit") pod "apiserver-648655b4b4-k22kr" (UID: "6200cf99-d7d2-473f-856b-447430bc9b08") : configmap "audit-0" not found
Mar 08 21:57:45.218609 master-0 kubenswrapper[7480]: E0308 21:57:45.218466 7480 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: secret "etcd-client" not found
Mar 08 21:57:45.218609 master-0 kubenswrapper[7480]: E0308 21:57:45.218552 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-etcd-client podName:6200cf99-d7d2-473f-856b-447430bc9b08 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:46.218540064 +0000 UTC m=+16.672160686 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-etcd-client") pod "apiserver-648655b4b4-k22kr" (UID: "6200cf99-d7d2-473f-856b-447430bc9b08") : secret "etcd-client" not found
Mar 08 21:57:46.239104 master-0 kubenswrapper[7480]: I0308 21:57:46.238796 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-serving-cert\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr"
Mar 08 21:57:46.239104 master-0 kubenswrapper[7480]: E0308 21:57:46.239063 7480 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Mar 08 21:57:46.239730 master-0 kubenswrapper[7480]: E0308 21:57:46.239203 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-serving-cert podName:6200cf99-d7d2-473f-856b-447430bc9b08 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:48.239172271 +0000 UTC m=+18.692792883 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-serving-cert") pod "apiserver-648655b4b4-k22kr" (UID: "6200cf99-d7d2-473f-856b-447430bc9b08") : secret "serving-cert" not found
Mar 08 21:57:46.239730 master-0 kubenswrapper[7480]: I0308 21:57:46.239332 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-audit\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr"
Mar 08 21:57:46.239730 master-0 kubenswrapper[7480]: I0308 21:57:46.239416 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-etcd-client\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr"
Mar 08 21:57:46.239730 master-0 kubenswrapper[7480]: E0308 21:57:46.239698 7480 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: secret "etcd-client" not found
Mar 08 21:57:46.239730 master-0 kubenswrapper[7480]: E0308 21:57:46.239706 7480 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 08 21:57:46.239885 master-0 kubenswrapper[7480]: E0308 21:57:46.239756 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-audit podName:6200cf99-d7d2-473f-856b-447430bc9b08 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:48.239743906 +0000 UTC m=+18.693364608 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-audit") pod "apiserver-648655b4b4-k22kr" (UID: "6200cf99-d7d2-473f-856b-447430bc9b08") : configmap "audit-0" not found
Mar 08 21:57:46.239885 master-0 kubenswrapper[7480]: E0308 21:57:46.239785 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-etcd-client podName:6200cf99-d7d2-473f-856b-447430bc9b08 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:48.239772886 +0000 UTC m=+18.693393628 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-etcd-client") pod "apiserver-648655b4b4-k22kr" (UID: "6200cf99-d7d2-473f-856b-447430bc9b08") : secret "etcd-client" not found Mar 08 21:57:46.544728 master-0 kubenswrapper[7480]: I0308 21:57:46.543224 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:46.544728 master-0 kubenswrapper[7480]: I0308 21:57:46.543309 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert\") pod \"olm-operator-d64cfc9db-xqh7x\" (UID: \"3c50dd1f-fcbc-412c-a1cc-0738ea4464e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" Mar 08 21:57:46.544728 master-0 kubenswrapper[7480]: I0308 21:57:46.543744 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls\") pod \"dns-operator-589895fbb7-wtvp5\" (UID: \"df48e7e0-0659-48e2-9b6a-32c964ff47b2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" Mar 08 21:57:46.544728 master-0 kubenswrapper[7480]: I0308 21:57:46.543833 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs\") pod \"multus-admission-controller-8d675b596-ddw98\" (UID: \"1dfc8afd-2330-46a4-ae5b-36522102b332\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddw98" Mar 08 21:57:46.544728 master-0 kubenswrapper[7480]: I0308 21:57:46.544045 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-x5zxr\" (UID: \"be431b74-1116-4b0f-8b25-bbb0408411b0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 21:57:46.544728 master-0 kubenswrapper[7480]: E0308 21:57:46.544132 7480 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 08 21:57:46.544728 master-0 kubenswrapper[7480]: E0308 21:57:46.544320 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert podName:3c50dd1f-fcbc-412c-a1cc-0738ea4464e0 nodeName:}" failed. No retries permitted until 2026-03-08 21:58:02.544277361 +0000 UTC m=+32.997898003 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert") pod "olm-operator-d64cfc9db-xqh7x" (UID: "3c50dd1f-fcbc-412c-a1cc-0738ea4464e0") : secret "olm-operator-serving-cert" not found Mar 08 21:57:46.544728 master-0 kubenswrapper[7480]: E0308 21:57:46.544389 7480 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 08 21:57:46.544728 master-0 kubenswrapper[7480]: I0308 21:57:46.544445 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-5ljhh\" (UID: \"7e0267ba-5dd7-4e81-885f-95b27a7b42ea\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 21:57:46.544728 master-0 kubenswrapper[7480]: E0308 21:57:46.544462 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert podName:be431b74-1116-4b0f-8b25-bbb0408411b0 nodeName:}" failed. No retries permitted until 2026-03-08 21:58:02.544439185 +0000 UTC m=+32.998059777 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-x5zxr" (UID: "be431b74-1116-4b0f-8b25-bbb0408411b0") : secret "package-server-manager-serving-cert" not found Mar 08 21:57:46.544728 master-0 kubenswrapper[7480]: I0308 21:57:46.544632 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert\") pod \"catalog-operator-7d9c49f57b-6q5t2\" (UID: \"83b5f0b6-adee-4820-8212-b4d182b178d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" Mar 08 21:57:46.544728 master-0 kubenswrapper[7480]: I0308 21:57:46.544732 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-mt484\" (UID: \"4ef806a4-5486-43a9-8bfa-b1670c888dc1\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" Mar 08 21:57:46.545944 master-0 kubenswrapper[7480]: I0308 21:57:46.544835 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs\") pod \"network-metrics-daemon-lqdbv\" (UID: \"44e67e41-045e-42ef-8f60-6ef15606d6a2\") " pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:57:46.545944 master-0 kubenswrapper[7480]: E0308 21:57:46.544879 7480 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 08 21:57:46.545944 master-0 kubenswrapper[7480]: I0308 21:57:46.544916 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: 
\"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 21:57:46.545944 master-0 kubenswrapper[7480]: E0308 21:57:46.545014 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert podName:83b5f0b6-adee-4820-8212-b4d182b178d2 nodeName:}" failed. No retries permitted until 2026-03-08 21:58:02.544980509 +0000 UTC m=+32.998601151 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert") pod "catalog-operator-7d9c49f57b-6q5t2" (UID: "83b5f0b6-adee-4820-8212-b4d182b178d2") : secret "catalog-operator-serving-cert" not found Mar 08 21:57:46.551217 master-0 kubenswrapper[7480]: I0308 21:57:46.549402 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs\") pod \"network-metrics-daemon-lqdbv\" (UID: \"44e67e41-045e-42ef-8f60-6ef15606d6a2\") " pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:57:46.551217 master-0 kubenswrapper[7480]: I0308 21:57:46.549427 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-5ljhh\" (UID: \"7e0267ba-5dd7-4e81-885f-95b27a7b42ea\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 21:57:46.551217 master-0 kubenswrapper[7480]: I0308 21:57:46.549989 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-mt484\" (UID: \"4ef806a4-5486-43a9-8bfa-b1670c888dc1\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" Mar 08 21:57:46.551217 master-0 kubenswrapper[7480]: I0308 21:57:46.550053 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs\") pod \"multus-admission-controller-8d675b596-ddw98\" (UID: \"1dfc8afd-2330-46a4-ae5b-36522102b332\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddw98" Mar 08 21:57:46.552049 master-0 kubenswrapper[7480]: I0308 21:57:46.551977 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls\") pod \"dns-operator-589895fbb7-wtvp5\" (UID: \"df48e7e0-0659-48e2-9b6a-32c964ff47b2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" Mar 08 21:57:46.556066 master-0 kubenswrapper[7480]: I0308 21:57:46.555985 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:46.561187 master-0 kubenswrapper[7480]: I0308 21:57:46.561120 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 21:57:46.633942 master-0 kubenswrapper[7480]: I0308 21:57:46.633854 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 21:57:46.634207 master-0 kubenswrapper[7480]: I0308 21:57:46.633950 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 21:57:46.634207 master-0 kubenswrapper[7480]: I0308 21:57:46.633982 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" Mar 08 21:57:46.634207 master-0 kubenswrapper[7480]: I0308 21:57:46.633971 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" Mar 08 21:57:46.634207 master-0 kubenswrapper[7480]: I0308 21:57:46.634116 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 21:57:46.646757 master-0 kubenswrapper[7480]: I0308 21:57:46.646709 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-ddw98" Mar 08 21:57:46.656695 master-0 kubenswrapper[7480]: I0308 21:57:46.656637 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 21:57:47.256129 master-0 kubenswrapper[7480]: I0308 21:57:47.254131 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-ddw98"] Mar 08 21:57:47.350891 master-0 kubenswrapper[7480]: I0308 21:57:47.350830 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-589895fbb7-wtvp5"] Mar 08 21:57:47.361802 master-0 kubenswrapper[7480]: W0308 21:57:47.361724 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf48e7e0_0659_48e2_9b6a_32c964ff47b2.slice/crio-de7e09860c85ea273caa21fdbfda6d2e559117a5f7a6df3707305d264e29d687 WatchSource:0}: Error finding container de7e09860c85ea273caa21fdbfda6d2e559117a5f7a6df3707305d264e29d687: Status 404 returned error can't find the container with id de7e09860c85ea273caa21fdbfda6d2e559117a5f7a6df3707305d264e29d687 Mar 08 21:57:47.438359 master-0 kubenswrapper[7480]: I0308 21:57:47.438299 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484"] Mar 08 21:57:47.758106 master-0 kubenswrapper[7480]: I0308 21:57:47.757900 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lqdbv"] Mar 08 21:57:47.978864 master-0 kubenswrapper[7480]: I0308 21:57:47.978789 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ddab3a6-1c13-4476-abc5-1c65301ae173-client-ca\") pod \"controller-manager-c4db5b54-m7wjt\" (UID: \"1ddab3a6-1c13-4476-abc5-1c65301ae173\") " pod="openshift-controller-manager/controller-manager-c4db5b54-m7wjt" Mar 08 21:57:47.979155 master-0 kubenswrapper[7480]: 
E0308 21:57:47.979008 7480 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 08 21:57:47.979197 master-0 kubenswrapper[7480]: E0308 21:57:47.979179 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1ddab3a6-1c13-4476-abc5-1c65301ae173-client-ca podName:1ddab3a6-1c13-4476-abc5-1c65301ae173 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:55.979144284 +0000 UTC m=+26.432764916 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1ddab3a6-1c13-4476-abc5-1c65301ae173-client-ca") pod "controller-manager-c4db5b54-m7wjt" (UID: "1ddab3a6-1c13-4476-abc5-1c65301ae173") : configmap "client-ca" not found Mar 08 21:57:48.080583 master-0 kubenswrapper[7480]: I0308 21:57:48.080504 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" event={"ID":"d287e2ca-f134-4e34-96f7-50a3055ee119","Type":"ContainerStarted","Data":"8d516b9f38991558f05c5da2875d325fa5984b9cedd39d8165f024180e98bc7a"} Mar 08 21:57:48.082129 master-0 kubenswrapper[7480]: I0308 21:57:48.082056 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" event={"ID":"df48e7e0-0659-48e2-9b6a-32c964ff47b2","Type":"ContainerStarted","Data":"de7e09860c85ea273caa21fdbfda6d2e559117a5f7a6df3707305d264e29d687"} Mar 08 21:57:48.085340 master-0 kubenswrapper[7480]: I0308 21:57:48.085281 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" event={"ID":"d0641333-feda-44c5-baf5-ceee4ce3fd8f","Type":"ContainerStarted","Data":"ba63e07913394038e6214607c806df6fc81079644bc68ca5910ad463422e98db"} Mar 08 21:57:48.085543 master-0 kubenswrapper[7480]: I0308 21:57:48.085511 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" Mar 08 21:57:48.087389 master-0 kubenswrapper[7480]: I0308 21:57:48.087313 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lqdbv" event={"ID":"44e67e41-045e-42ef-8f60-6ef15606d6a2","Type":"ContainerStarted","Data":"0de0dd88c4bba9f852c91550e6622cdfe9b4a30a405c23edc2a915817b573fec"} Mar 08 21:57:48.089548 master-0 kubenswrapper[7480]: I0308 21:57:48.089480 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" event={"ID":"2851c096-f5cb-4a46-a5a0-ac0b1341033b","Type":"ContainerStarted","Data":"9a488623b815fc824bec74857e2960fc417072b53ab920bd8c886dd1a94fa5ae"} Mar 08 21:57:48.091193 master-0 kubenswrapper[7480]: I0308 21:57:48.091123 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-ddw98" event={"ID":"1dfc8afd-2330-46a4-ae5b-36522102b332","Type":"ContainerStarted","Data":"ab657f98950abde628b198898d3905a5958a770bb1ea4d2bf6b9cc5f024cadc1"} Mar 08 21:57:48.092342 master-0 kubenswrapper[7480]: I0308 21:57:48.092298 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" event={"ID":"4ef806a4-5486-43a9-8bfa-b1670c888dc1","Type":"ContainerStarted","Data":"53b5043fd325310586d0ad90805405242c17d1ce6d248bad4d8308d740dacd52"} Mar 08 21:57:48.095674 master-0 kubenswrapper[7480]: I0308 21:57:48.095645 7480 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" event={"ID":"de89c423-0f2a-440f-9fa9-92fefea84b09","Type":"ContainerStarted","Data":"c1e691e59e7c1bed851b1abd3631d646daa0cf480534e0faeca027a9151c11dc"} Mar 08 21:57:48.278915 master-0 kubenswrapper[7480]: I0308 21:57:48.273111 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr"] Mar 08 21:57:48.290626 master-0 kubenswrapper[7480]: I0308 21:57:48.288788 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-serving-cert\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:48.290626 master-0 kubenswrapper[7480]: I0308 21:57:48.288857 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-audit\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:48.290626 master-0 kubenswrapper[7480]: I0308 21:57:48.288911 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-etcd-client\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:48.302379 master-0 kubenswrapper[7480]: E0308 21:57:48.296367 7480 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 08 21:57:48.302379 master-0 kubenswrapper[7480]: E0308 21:57:48.296522 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-serving-cert podName:6200cf99-d7d2-473f-856b-447430bc9b08 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:52.296488231 +0000 UTC m=+22.750108873 (durationBeforeRetry 4s). 
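These mount errors are not only in the node journal; the kubelet also records them as FailedMount events on the affected pods, which is usually the easier place to watch during an install. A client-go sketch that lists them for one of the namespaces above (same assumed kubeconfig default as before; namespace and field selector are illustrative choices):

```go
// Sketch: list FailedMount events cluster-side instead of grepping the node
// journal.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	evs, err := cs.CoreV1().Events("openshift-apiserver").List(context.Background(),
		metav1.ListOptions{FieldSelector: "reason=FailedMount"})
	if err != nil {
		panic(err)
	}
	for _, e := range evs.Items {
		fmt.Printf("%s  %s/%s: %s\n", e.LastTimestamp, e.Namespace, e.InvolvedObject.Name, e.Message)
	}
}
```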
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-serving-cert") pod "apiserver-648655b4b4-k22kr" (UID: "6200cf99-d7d2-473f-856b-447430bc9b08") : secret "serving-cert" not found Mar 08 21:57:48.302379 master-0 kubenswrapper[7480]: W0308 21:57:48.299590 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda913c639_ebfc_42a3_85cd_8a460027d3ec.slice/crio-d06c21917a01888be55a284a4198557df93616f6e6b788240f364df6bfb82d3a WatchSource:0}: Error finding container d06c21917a01888be55a284a4198557df93616f6e6b788240f364df6bfb82d3a: Status 404 returned error can't find the container with id d06c21917a01888be55a284a4198557df93616f6e6b788240f364df6bfb82d3a Mar 08 21:57:48.303068 master-0 kubenswrapper[7480]: I0308 21:57:48.302743 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-etcd-client\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:48.310097 master-0 kubenswrapper[7480]: E0308 21:57:48.308130 7480 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 08 21:57:48.310097 master-0 kubenswrapper[7480]: E0308 21:57:48.308266 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-audit podName:6200cf99-d7d2-473f-856b-447430bc9b08 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:52.308235717 +0000 UTC m=+22.761856329 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-audit") pod "apiserver-648655b4b4-k22kr" (UID: "6200cf99-d7d2-473f-856b-447430bc9b08") : configmap "audit-0" not found Mar 08 21:57:48.784121 master-0 kubenswrapper[7480]: I0308 21:57:48.781771 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh"] Mar 08 21:57:48.784121 master-0 kubenswrapper[7480]: I0308 21:57:48.781864 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-677db989d6-cjdgr"] Mar 08 21:57:49.106211 master-0 kubenswrapper[7480]: I0308 21:57:49.106125 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" event={"ID":"7e0267ba-5dd7-4e81-885f-95b27a7b42ea","Type":"ContainerStarted","Data":"d14eb63d678bcf527293b2268e60d6e7c54629d3617ad205aa85e0b95e38c0c8"} Mar 08 21:57:49.107296 master-0 kubenswrapper[7480]: I0308 21:57:49.107260 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" event={"ID":"a913c639-ebfc-42a3-85cd-8a460027d3ec","Type":"ContainerStarted","Data":"d06c21917a01888be55a284a4198557df93616f6e6b788240f364df6bfb82d3a"} Mar 08 21:57:49.109369 master-0 kubenswrapper[7480]: I0308 21:57:49.109257 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" event={"ID":"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed","Type":"ContainerStarted","Data":"9d2b94760fb5bd6c1ac833545141ede88958ba2ac4b1af0ff830a401107ab2f9"} Mar 08 21:57:49.304881 master-0 kubenswrapper[7480]: I0308 21:57:49.304808 7480 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-rxbl5"] Mar 08 21:57:49.307034 master-0 kubenswrapper[7480]: I0308 21:57:49.307003 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.416100 master-0 kubenswrapper[7480]: I0308 21:57:49.415558 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-tuned\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.416100 master-0 kubenswrapper[7480]: I0308 21:57:49.415605 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jwf9\" (UniqueName: \"kubernetes.io/projected/f3fbcd83-a3e1-4de1-aceb-2692d348e495-kube-api-access-5jwf9\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.416100 master-0 kubenswrapper[7480]: I0308 21:57:49.415629 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-sysctl-d\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.416100 master-0 kubenswrapper[7480]: I0308 21:57:49.415667 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-host\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.416100 master-0 kubenswrapper[7480]: I0308 21:57:49.415706 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-kubernetes\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.416100 master-0 kubenswrapper[7480]: I0308 21:57:49.415722 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-run\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.416100 master-0 kubenswrapper[7480]: I0308 21:57:49.415743 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-lib-modules\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.416100 master-0 kubenswrapper[7480]: I0308 21:57:49.415758 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-modprobe-d\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " 
pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.416100 master-0 kubenswrapper[7480]: I0308 21:57:49.415781 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f3fbcd83-a3e1-4de1-aceb-2692d348e495-tmp\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.416100 master-0 kubenswrapper[7480]: I0308 21:57:49.415804 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-sysconfig\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.416100 master-0 kubenswrapper[7480]: I0308 21:57:49.415819 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-systemd\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.416100 master-0 kubenswrapper[7480]: I0308 21:57:49.415851 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-sys\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.416100 master-0 kubenswrapper[7480]: I0308 21:57:49.415866 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-sysctl-conf\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.416100 master-0 kubenswrapper[7480]: I0308 21:57:49.415883 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-var-lib-kubelet\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.516629 master-0 kubenswrapper[7480]: I0308 21:57:49.516524 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-kubernetes\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.516901 master-0 kubenswrapper[7480]: I0308 21:57:49.516849 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-kubernetes\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.517032 master-0 kubenswrapper[7480]: I0308 21:57:49.516928 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-run\") pod \"tuned-rxbl5\" 
(UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.517032 master-0 kubenswrapper[7480]: I0308 21:57:49.516961 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-lib-modules\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.517032 master-0 kubenswrapper[7480]: I0308 21:57:49.517002 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-modprobe-d\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.517032 master-0 kubenswrapper[7480]: I0308 21:57:49.517023 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f3fbcd83-a3e1-4de1-aceb-2692d348e495-tmp\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.517197 master-0 kubenswrapper[7480]: I0308 21:57:49.517064 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-sysconfig\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.517197 master-0 kubenswrapper[7480]: I0308 21:57:49.517118 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-systemd\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.517197 master-0 kubenswrapper[7480]: I0308 21:57:49.517184 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-sys\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.517282 master-0 kubenswrapper[7480]: I0308 21:57:49.517207 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-sysctl-conf\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.517282 master-0 kubenswrapper[7480]: I0308 21:57:49.517225 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-var-lib-kubelet\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.517282 master-0 kubenswrapper[7480]: I0308 21:57:49.517264 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-tuned\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " 
pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.517359 master-0 kubenswrapper[7480]: I0308 21:57:49.517284 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jwf9\" (UniqueName: \"kubernetes.io/projected/f3fbcd83-a3e1-4de1-aceb-2692d348e495-kube-api-access-5jwf9\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.517359 master-0 kubenswrapper[7480]: I0308 21:57:49.517309 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-sysctl-d\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.517359 master-0 kubenswrapper[7480]: I0308 21:57:49.517340 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-host\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.517434 master-0 kubenswrapper[7480]: I0308 21:57:49.517412 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-host\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.517572 master-0 kubenswrapper[7480]: I0308 21:57:49.517555 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-run\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.517657 master-0 kubenswrapper[7480]: I0308 21:57:49.517638 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-lib-modules\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.517711 master-0 kubenswrapper[7480]: I0308 21:57:49.517700 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-modprobe-d\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.518462 master-0 kubenswrapper[7480]: I0308 21:57:49.518431 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-sysctl-conf\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.518531 master-0 kubenswrapper[7480]: I0308 21:57:49.518508 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-sysconfig\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 21:57:49.518583 master-0 
kubenswrapper[7480]: I0308 21:57:49.518558 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-systemd\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5"
Mar 08 21:57:49.518630 master-0 kubenswrapper[7480]: I0308 21:57:49.518608 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-sys\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5"
Mar 08 21:57:49.519242 master-0 kubenswrapper[7480]: I0308 21:57:49.519210 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-var-lib-kubelet\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5"
Mar 08 21:57:49.519503 master-0 kubenswrapper[7480]: I0308 21:57:49.519444 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-sysctl-d\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5"
Mar 08 21:57:49.523433 master-0 kubenswrapper[7480]: I0308 21:57:49.523382 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-tuned\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5"
Mar 08 21:57:49.523748 master-0 kubenswrapper[7480]: I0308 21:57:49.523732 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f3fbcd83-a3e1-4de1-aceb-2692d348e495-tmp\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5"
Mar 08 21:57:50.039498 master-0 kubenswrapper[7480]: I0308 21:57:50.039281 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r"
Mar 08 21:57:50.039772 master-0 kubenswrapper[7480]: I0308 21:57:50.039509 7480 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 08 21:57:50.058135 master-0 kubenswrapper[7480]: I0308 21:57:50.058052 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r"
Mar 08 21:57:50.210766 master-0 kubenswrapper[7480]: I0308 21:57:50.206971 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jwf9\" (UniqueName: \"kubernetes.io/projected/f3fbcd83-a3e1-4de1-aceb-2692d348e495-kube-api-access-5jwf9\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5"
Mar 08 21:57:50.216051 master-0 kubenswrapper[7480]: I0308 21:57:50.215980 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs"
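Note the contrast: every one of the tuned-rxbl5 mounts above is a hostPath or emptyDir volume and succeeds immediately, because SetUp for those needs no API object, while the secret/configmap volumes keep failing until their objects exist. For a hostPath with a type constraint, the kubelet's check essentially amounts to stat'ing the node path; a stdlib approximation (the function name is this sketch's, and /etc/systemd, /etc/sysctl.d, /lib/modules are the obvious host paths behind the etc-systemd, etc-sysctl-d and lib-modules volume names, though the pod spec itself is not shown here):

```go
// Sketch of a hostPath `type: Directory` style check: the volume is mountable
// iff the node path exists and is a directory. Illustrative only.
package main

import (
	"fmt"
	"os"
)

func verifyHostPathDirectory(path string) error {
	info, err := os.Stat(path)
	if err != nil {
		return err // missing path: the mount would fail
	}
	if !info.IsDir() {
		return fmt.Errorf("%s exists but is not a directory", path)
	}
	return nil
}

func main() {
	for _, p := range []string{"/etc/systemd", "/etc/sysctl.d", "/lib/modules"} {
		fmt.Println(p, "->", verifyHostPathDirectory(p))
	}
}
```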
Mar 08 21:57:50.338856 master-0 kubenswrapper[7480]: I0308 21:57:50.338376 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-rxbl5"
Mar 08 21:57:51.550102 master-0 kubenswrapper[7480]: I0308 21:57:51.547806 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-648655b4b4-k22kr"]
Mar 08 21:57:51.550102 master-0 kubenswrapper[7480]: E0308 21:57:51.548017 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-648655b4b4-k22kr" podUID="6200cf99-d7d2-473f-856b-447430bc9b08"
Mar 08 21:57:51.684730 master-0 kubenswrapper[7480]: I0308 21:57:51.683560 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 08 21:57:51.684730 master-0 kubenswrapper[7480]: I0308 21:57:51.684096 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Mar 08 21:57:51.686870 master-0 kubenswrapper[7480]: I0308 21:57:51.686815 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Mar 08 21:57:51.698935 master-0 kubenswrapper[7480]: I0308 21:57:51.698895 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 08 21:57:51.749585 master-0 kubenswrapper[7480]: I0308 21:57:51.749511 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65148321-8caf-4e9c-80cc-ced8e2a8ac03-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"65148321-8caf-4e9c-80cc-ced8e2a8ac03\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 08 21:57:51.749585 master-0 kubenswrapper[7480]: I0308 21:57:51.749568 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/65148321-8caf-4e9c-80cc-ced8e2a8ac03-var-lock\") pod \"installer-1-master-0\" (UID: \"65148321-8caf-4e9c-80cc-ced8e2a8ac03\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 08 21:57:51.749849 master-0 kubenswrapper[7480]: I0308 21:57:51.749629 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65148321-8caf-4e9c-80cc-ced8e2a8ac03-kube-api-access\") pod \"installer-1-master-0\" (UID: \"65148321-8caf-4e9c-80cc-ced8e2a8ac03\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 08 21:57:51.801583 master-0 kubenswrapper[7480]: I0308 21:57:51.801426 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0"]
Mar 08 21:57:51.802169 master-0 kubenswrapper[7480]: I0308 21:57:51.802148 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 08 21:57:51.805275 master-0 kubenswrapper[7480]: I0308 21:57:51.805245 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Mar 08 21:57:51.809700 master-0 kubenswrapper[7480]: I0308 21:57:51.809668 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 08 21:57:51.851033 master-0 kubenswrapper[7480]: I0308 21:57:51.850985 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/57a34dbc-eb6d-44f5-b1aa-4762b69382ed-kube-api-access\") pod \"installer-1-master-0\" (UID: \"57a34dbc-eb6d-44f5-b1aa-4762b69382ed\") " pod="openshift-etcd/installer-1-master-0" Mar 08 21:57:51.851226 master-0 kubenswrapper[7480]: I0308 21:57:51.851056 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65148321-8caf-4e9c-80cc-ced8e2a8ac03-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"65148321-8caf-4e9c-80cc-ced8e2a8ac03\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 08 21:57:51.851226 master-0 kubenswrapper[7480]: I0308 21:57:51.851134 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65148321-8caf-4e9c-80cc-ced8e2a8ac03-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"65148321-8caf-4e9c-80cc-ced8e2a8ac03\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 08 21:57:51.851226 master-0 kubenswrapper[7480]: I0308 21:57:51.851182 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/65148321-8caf-4e9c-80cc-ced8e2a8ac03-var-lock\") pod \"installer-1-master-0\" (UID: \"65148321-8caf-4e9c-80cc-ced8e2a8ac03\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 08 21:57:51.851653 master-0 kubenswrapper[7480]: I0308 21:57:51.851296 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65148321-8caf-4e9c-80cc-ced8e2a8ac03-kube-api-access\") pod \"installer-1-master-0\" (UID: \"65148321-8caf-4e9c-80cc-ced8e2a8ac03\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 08 21:57:51.851653 master-0 kubenswrapper[7480]: I0308 21:57:51.851340 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/57a34dbc-eb6d-44f5-b1aa-4762b69382ed-var-lock\") pod \"installer-1-master-0\" (UID: \"57a34dbc-eb6d-44f5-b1aa-4762b69382ed\") " pod="openshift-etcd/installer-1-master-0" Mar 08 21:57:51.851653 master-0 kubenswrapper[7480]: I0308 21:57:51.851395 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/57a34dbc-eb6d-44f5-b1aa-4762b69382ed-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"57a34dbc-eb6d-44f5-b1aa-4762b69382ed\") " pod="openshift-etcd/installer-1-master-0" Mar 08 21:57:51.851653 master-0 kubenswrapper[7480]: I0308 21:57:51.851535 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/65148321-8caf-4e9c-80cc-ced8e2a8ac03-var-lock\") pod \"installer-1-master-0\" (UID: \"65148321-8caf-4e9c-80cc-ced8e2a8ac03\") " 
pod="openshift-kube-scheduler/installer-1-master-0" Mar 08 21:57:51.882286 master-0 kubenswrapper[7480]: I0308 21:57:51.882232 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65148321-8caf-4e9c-80cc-ced8e2a8ac03-kube-api-access\") pod \"installer-1-master-0\" (UID: \"65148321-8caf-4e9c-80cc-ced8e2a8ac03\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 08 21:57:51.953138 master-0 kubenswrapper[7480]: I0308 21:57:51.953050 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/57a34dbc-eb6d-44f5-b1aa-4762b69382ed-var-lock\") pod \"installer-1-master-0\" (UID: \"57a34dbc-eb6d-44f5-b1aa-4762b69382ed\") " pod="openshift-etcd/installer-1-master-0" Mar 08 21:57:51.953388 master-0 kubenswrapper[7480]: I0308 21:57:51.953253 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/57a34dbc-eb6d-44f5-b1aa-4762b69382ed-var-lock\") pod \"installer-1-master-0\" (UID: \"57a34dbc-eb6d-44f5-b1aa-4762b69382ed\") " pod="openshift-etcd/installer-1-master-0" Mar 08 21:57:51.953445 master-0 kubenswrapper[7480]: I0308 21:57:51.953371 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/57a34dbc-eb6d-44f5-b1aa-4762b69382ed-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"57a34dbc-eb6d-44f5-b1aa-4762b69382ed\") " pod="openshift-etcd/installer-1-master-0" Mar 08 21:57:51.953535 master-0 kubenswrapper[7480]: I0308 21:57:51.953493 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/57a34dbc-eb6d-44f5-b1aa-4762b69382ed-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"57a34dbc-eb6d-44f5-b1aa-4762b69382ed\") " pod="openshift-etcd/installer-1-master-0" Mar 08 21:57:51.953676 master-0 kubenswrapper[7480]: I0308 21:57:51.953644 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/57a34dbc-eb6d-44f5-b1aa-4762b69382ed-kube-api-access\") pod \"installer-1-master-0\" (UID: \"57a34dbc-eb6d-44f5-b1aa-4762b69382ed\") " pod="openshift-etcd/installer-1-master-0" Mar 08 21:57:51.979002 master-0 kubenswrapper[7480]: I0308 21:57:51.978941 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/57a34dbc-eb6d-44f5-b1aa-4762b69382ed-kube-api-access\") pod \"installer-1-master-0\" (UID: \"57a34dbc-eb6d-44f5-b1aa-4762b69382ed\") " pod="openshift-etcd/installer-1-master-0" Mar 08 21:57:52.027065 master-0 kubenswrapper[7480]: I0308 21:57:52.026990 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 08 21:57:52.144727 master-0 kubenswrapper[7480]: I0308 21:57:52.143804 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:52.154511 master-0 kubenswrapper[7480]: I0308 21:57:52.154452 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c4db5b54-m7wjt"] Mar 08 21:57:52.175543 master-0 kubenswrapper[7480]: I0308 21:57:52.175471 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:52.176433 master-0 kubenswrapper[7480]: E0308 21:57:52.176360 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-c4db5b54-m7wjt" podUID="1ddab3a6-1c13-4476-abc5-1c65301ae173" Mar 08 21:57:52.176685 master-0 kubenswrapper[7480]: I0308 21:57:52.176635 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 08 21:57:52.208221 master-0 kubenswrapper[7480]: I0308 21:57:52.207529 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn"] Mar 08 21:57:52.208221 master-0 kubenswrapper[7480]: E0308 21:57:52.207896 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn" podUID="838f81b9-0423-437e-88ed-88eebfe4c188" Mar 08 21:57:52.278480 master-0 kubenswrapper[7480]: I0308 21:57:52.278402 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-config\") pod \"6200cf99-d7d2-473f-856b-447430bc9b08\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " Mar 08 21:57:52.278724 master-0 kubenswrapper[7480]: I0308 21:57:52.278577 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-etcd-client\") pod \"6200cf99-d7d2-473f-856b-447430bc9b08\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " Mar 08 21:57:52.278724 master-0 kubenswrapper[7480]: I0308 21:57:52.278611 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-image-import-ca\") pod \"6200cf99-d7d2-473f-856b-447430bc9b08\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " Mar 08 21:57:52.278724 master-0 kubenswrapper[7480]: I0308 21:57:52.278634 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-encryption-config\") pod \"6200cf99-d7d2-473f-856b-447430bc9b08\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " Mar 08 21:57:52.278919 master-0 kubenswrapper[7480]: I0308 21:57:52.278837 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6200cf99-d7d2-473f-856b-447430bc9b08-node-pullsecrets\") pod \"6200cf99-d7d2-473f-856b-447430bc9b08\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " Mar 08 21:57:52.278969 master-0 kubenswrapper[7480]: I0308 21:57:52.278955 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89t47\" (UniqueName: \"kubernetes.io/projected/6200cf99-d7d2-473f-856b-447430bc9b08-kube-api-access-89t47\") pod \"6200cf99-d7d2-473f-856b-447430bc9b08\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " Mar 08 21:57:52.279010 master-0 kubenswrapper[7480]: I0308 21:57:52.278964 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/host-path/6200cf99-d7d2-473f-856b-447430bc9b08-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "6200cf99-d7d2-473f-856b-447430bc9b08" (UID: "6200cf99-d7d2-473f-856b-447430bc9b08"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 21:57:52.279090 master-0 kubenswrapper[7480]: I0308 21:57:52.279045 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6200cf99-d7d2-473f-856b-447430bc9b08-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "6200cf99-d7d2-473f-856b-447430bc9b08" (UID: "6200cf99-d7d2-473f-856b-447430bc9b08"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 21:57:52.279090 master-0 kubenswrapper[7480]: I0308 21:57:52.279001 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6200cf99-d7d2-473f-856b-447430bc9b08-audit-dir\") pod \"6200cf99-d7d2-473f-856b-447430bc9b08\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " Mar 08 21:57:52.279189 master-0 kubenswrapper[7480]: I0308 21:57:52.279161 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-trusted-ca-bundle\") pod \"6200cf99-d7d2-473f-856b-447430bc9b08\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " Mar 08 21:57:52.279232 master-0 kubenswrapper[7480]: I0308 21:57:52.279203 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-etcd-serving-ca\") pod \"6200cf99-d7d2-473f-856b-447430bc9b08\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " Mar 08 21:57:52.279326 master-0 kubenswrapper[7480]: I0308 21:57:52.279278 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "6200cf99-d7d2-473f-856b-447430bc9b08" (UID: "6200cf99-d7d2-473f-856b-447430bc9b08"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 21:57:52.279644 master-0 kubenswrapper[7480]: I0308 21:57:52.279619 7480 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-image-import-ca\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:52.279644 master-0 kubenswrapper[7480]: I0308 21:57:52.279646 7480 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6200cf99-d7d2-473f-856b-447430bc9b08-node-pullsecrets\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:52.279751 master-0 kubenswrapper[7480]: I0308 21:57:52.279645 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6200cf99-d7d2-473f-856b-447430bc9b08" (UID: "6200cf99-d7d2-473f-856b-447430bc9b08"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 21:57:52.279751 master-0 kubenswrapper[7480]: I0308 21:57:52.279662 7480 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6200cf99-d7d2-473f-856b-447430bc9b08-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:52.279843 master-0 kubenswrapper[7480]: I0308 21:57:52.279801 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-config" (OuterVolumeSpecName: "config") pod "6200cf99-d7d2-473f-856b-447430bc9b08" (UID: "6200cf99-d7d2-473f-856b-447430bc9b08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 21:57:52.280262 master-0 kubenswrapper[7480]: I0308 21:57:52.280233 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "6200cf99-d7d2-473f-856b-447430bc9b08" (UID: "6200cf99-d7d2-473f-856b-447430bc9b08"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 21:57:52.282739 master-0 kubenswrapper[7480]: I0308 21:57:52.282694 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "6200cf99-d7d2-473f-856b-447430bc9b08" (UID: "6200cf99-d7d2-473f-856b-447430bc9b08"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 21:57:52.282859 master-0 kubenswrapper[7480]: I0308 21:57:52.282834 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "6200cf99-d7d2-473f-856b-447430bc9b08" (UID: "6200cf99-d7d2-473f-856b-447430bc9b08"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 21:57:52.283539 master-0 kubenswrapper[7480]: I0308 21:57:52.283512 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6200cf99-d7d2-473f-856b-447430bc9b08-kube-api-access-89t47" (OuterVolumeSpecName: "kube-api-access-89t47") pod "6200cf99-d7d2-473f-856b-447430bc9b08" (UID: "6200cf99-d7d2-473f-856b-447430bc9b08"). InnerVolumeSpecName "kube-api-access-89t47". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 21:57:52.381892 master-0 kubenswrapper[7480]: I0308 21:57:52.381820 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-serving-cert\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:52.381892 master-0 kubenswrapper[7480]: I0308 21:57:52.381897 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-audit\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:52.382775 master-0 kubenswrapper[7480]: I0308 21:57:52.382700 7480 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:52.382775 master-0 kubenswrapper[7480]: I0308 21:57:52.382742 7480 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-etcd-serving-ca\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:52.382775 master-0 kubenswrapper[7480]: I0308 21:57:52.382754 7480 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-config\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:52.382775 master-0 kubenswrapper[7480]: I0308 21:57:52.382766 7480 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-etcd-client\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:52.382775 master-0 kubenswrapper[7480]: I0308 21:57:52.382776 7480 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-encryption-config\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:52.382998 master-0 kubenswrapper[7480]: I0308 21:57:52.382789 7480 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89t47\" (UniqueName: \"kubernetes.io/projected/6200cf99-d7d2-473f-856b-447430bc9b08-kube-api-access-89t47\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:52.382998 master-0 kubenswrapper[7480]: E0308 21:57:52.382863 7480 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 08 21:57:52.382998 master-0 kubenswrapper[7480]: E0308 21:57:52.382933 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-audit podName:6200cf99-d7d2-473f-856b-447430bc9b08 nodeName:}" failed. No retries permitted until 2026-03-08 21:58:00.382912439 +0000 UTC m=+30.836533041 (durationBeforeRetry 8s). 
Mar 08 21:57:52.382998 master-0 kubenswrapper[7480]: E0308 21:57:52.382863 7480 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 08 21:57:52.382998 master-0 kubenswrapper[7480]: E0308 21:57:52.382933 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-audit podName:6200cf99-d7d2-473f-856b-447430bc9b08 nodeName:}" failed. No retries permitted until 2026-03-08 21:58:00.382912439 +0000 UTC m=+30.836533041 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-audit") pod "apiserver-648655b4b4-k22kr" (UID: "6200cf99-d7d2-473f-856b-447430bc9b08") : configmap "audit-0" not found
Mar 08 21:57:52.385126 master-0 kubenswrapper[7480]: I0308 21:57:52.385055 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-serving-cert\") pod \"apiserver-648655b4b4-k22kr\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") " pod="openshift-apiserver/apiserver-648655b4b4-k22kr"
Mar 08 21:57:52.483767 master-0 kubenswrapper[7480]: I0308 21:57:52.483666 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-serving-cert\") pod \"6200cf99-d7d2-473f-856b-447430bc9b08\" (UID: \"6200cf99-d7d2-473f-856b-447430bc9b08\") "
Mar 08 21:57:52.486889 master-0 kubenswrapper[7480]: I0308 21:57:52.486864 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6200cf99-d7d2-473f-856b-447430bc9b08" (UID: "6200cf99-d7d2-473f-856b-447430bc9b08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 21:57:52.585672 master-0 kubenswrapper[7480]: I0308 21:57:52.585611 7480 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6200cf99-d7d2-473f-856b-447430bc9b08-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 08 21:57:52.989575 master-0 kubenswrapper[7480]: I0308 21:57:52.989508 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/838f81b9-0423-437e-88ed-88eebfe4c188-client-ca\") pod \"route-controller-manager-685b849569-wt9mn\" (UID: \"838f81b9-0423-437e-88ed-88eebfe4c188\") " pod="openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn"
Mar 08 21:57:52.989575 master-0 kubenswrapper[7480]: I0308 21:57:52.989576 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/838f81b9-0423-437e-88ed-88eebfe4c188-serving-cert\") pod \"route-controller-manager-685b849569-wt9mn\" (UID: \"838f81b9-0423-437e-88ed-88eebfe4c188\") " pod="openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn"
Mar 08 21:57:52.990881 master-0 kubenswrapper[7480]: I0308 21:57:52.990853 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/838f81b9-0423-437e-88ed-88eebfe4c188-client-ca\") pod \"route-controller-manager-685b849569-wt9mn\" (UID: \"838f81b9-0423-437e-88ed-88eebfe4c188\") " pod="openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn"
Mar 08 21:57:52.994631 master-0 kubenswrapper[7480]: I0308 21:57:52.994553 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/838f81b9-0423-437e-88ed-88eebfe4c188-serving-cert\") pod \"route-controller-manager-685b849569-wt9mn\" (UID: \"838f81b9-0423-437e-88ed-88eebfe4c188\") " pod="openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn"
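The pair of E-level entries above shows why the volume manager gives up for a while: the old pod's template still references the audit-0 ConfigMap, which no longer exists, so nestedpendingoperations parks the mount with exponential backoff (a durationBeforeRetry of 8s matches a 0.5s starting delay doubled on each consecutive failure). The condition is transient here: the replacement ReplicaSet pod apiserver-6f9445b8fd-w44n6, added just below, references audit-1 instead, and the old pod is deleted before the retry fires. A sketch of that backoff policy, with constants modeled on the kubelet's rather than taken from this log:

```go
package main

import (
	"fmt"
	"time"
)

// Constants modeled on the kubelet's exponential backoff for volume
// operations (pkg/util/goroutinemap/exponentialbackoff); check your
// kubelet version's source for the exact values.
const (
	initialDurationBeforeRetry = 500 * time.Millisecond
	maxDurationBeforeRetry     = 2*time.Minute + 2*time.Second
)

// nextBackoff doubles the previous delay and caps it, which is how a
// durationBeforeRetry of 8s arises after a run of consecutive failures:
// 0.5s -> 1s -> 2s -> 4s -> 8s -> ...
func nextBackoff(prev time.Duration) time.Duration {
	if prev == 0 {
		return initialDurationBeforeRetry
	}
	next := 2 * prev
	if next > maxDurationBeforeRetry {
		next = maxDurationBeforeRetry
	}
	return next
}

func main() {
	d := time.Duration(0)
	for i := 1; i <= 6; i++ {
		d = nextBackoff(d)
		fmt.Printf("failure %d: retry in %v\n", i, d)
	}
}
```

A successful mount resets the backoff, so a pod that eventually finds its ConfigMap does not keep paying the accumulated delay.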
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn" Mar 08 21:57:53.148870 master-0 kubenswrapper[7480]: I0308 21:57:53.148401 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-648655b4b4-k22kr" Mar 08 21:57:53.149047 master-0 kubenswrapper[7480]: I0308 21:57:53.148453 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c4db5b54-m7wjt" Mar 08 21:57:53.161417 master-0 kubenswrapper[7480]: I0308 21:57:53.161350 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn" Mar 08 21:57:53.178458 master-0 kubenswrapper[7480]: I0308 21:57:53.178390 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c4db5b54-m7wjt" Mar 08 21:57:53.197716 master-0 kubenswrapper[7480]: I0308 21:57:53.195459 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-6f9445b8fd-w44n6"] Mar 08 21:57:53.197716 master-0 kubenswrapper[7480]: I0308 21:57:53.196352 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.197716 master-0 kubenswrapper[7480]: I0308 21:57:53.197530 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-648655b4b4-k22kr"] Mar 08 21:57:53.202461 master-0 kubenswrapper[7480]: I0308 21:57:53.198984 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 08 21:57:53.202461 master-0 kubenswrapper[7480]: I0308 21:57:53.199310 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 08 21:57:53.202461 master-0 kubenswrapper[7480]: I0308 21:57:53.199698 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 08 21:57:53.202461 master-0 kubenswrapper[7480]: I0308 21:57:53.200535 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 08 21:57:53.202461 master-0 kubenswrapper[7480]: I0308 21:57:53.202309 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 08 21:57:53.202913 master-0 kubenswrapper[7480]: I0308 21:57:53.202837 7480 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-648655b4b4-k22kr"] Mar 08 21:57:53.203165 master-0 kubenswrapper[7480]: I0308 21:57:53.203119 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 08 21:57:53.204864 master-0 kubenswrapper[7480]: I0308 21:57:53.203516 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 08 21:57:53.204864 master-0 kubenswrapper[7480]: I0308 21:57:53.203611 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 08 21:57:53.204864 master-0 kubenswrapper[7480]: I0308 21:57:53.204757 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 08 21:57:53.206760 master-0 kubenswrapper[7480]: I0308 21:57:53.206364 7480 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 08 21:57:53.214250 master-0 kubenswrapper[7480]: I0308 21:57:53.214201 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-6f9445b8fd-w44n6"] Mar 08 21:57:53.294400 master-0 kubenswrapper[7480]: I0308 21:57:53.293204 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ddab3a6-1c13-4476-abc5-1c65301ae173-config\") pod \"1ddab3a6-1c13-4476-abc5-1c65301ae173\" (UID: \"1ddab3a6-1c13-4476-abc5-1c65301ae173\") " Mar 08 21:57:53.295017 master-0 kubenswrapper[7480]: I0308 21:57:53.294938 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tn4g\" (UniqueName: \"kubernetes.io/projected/838f81b9-0423-437e-88ed-88eebfe4c188-kube-api-access-4tn4g\") pod \"838f81b9-0423-437e-88ed-88eebfe4c188\" (UID: \"838f81b9-0423-437e-88ed-88eebfe4c188\") " Mar 08 21:57:53.295165 master-0 kubenswrapper[7480]: I0308 21:57:53.294910 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ddab3a6-1c13-4476-abc5-1c65301ae173-config" (OuterVolumeSpecName: "config") pod "1ddab3a6-1c13-4476-abc5-1c65301ae173" (UID: "1ddab3a6-1c13-4476-abc5-1c65301ae173"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 21:57:53.295426 master-0 kubenswrapper[7480]: I0308 21:57:53.295255 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ddab3a6-1c13-4476-abc5-1c65301ae173-proxy-ca-bundles\") pod \"1ddab3a6-1c13-4476-abc5-1c65301ae173\" (UID: \"1ddab3a6-1c13-4476-abc5-1c65301ae173\") " Mar 08 21:57:53.295426 master-0 kubenswrapper[7480]: I0308 21:57:53.295305 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/838f81b9-0423-437e-88ed-88eebfe4c188-config\") pod \"838f81b9-0423-437e-88ed-88eebfe4c188\" (UID: \"838f81b9-0423-437e-88ed-88eebfe4c188\") " Mar 08 21:57:53.295426 master-0 kubenswrapper[7480]: I0308 21:57:53.295333 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/838f81b9-0423-437e-88ed-88eebfe4c188-serving-cert\") pod \"838f81b9-0423-437e-88ed-88eebfe4c188\" (UID: \"838f81b9-0423-437e-88ed-88eebfe4c188\") " Mar 08 21:57:53.295426 master-0 kubenswrapper[7480]: I0308 21:57:53.295351 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wggcz\" (UniqueName: \"kubernetes.io/projected/1ddab3a6-1c13-4476-abc5-1c65301ae173-kube-api-access-wggcz\") pod \"1ddab3a6-1c13-4476-abc5-1c65301ae173\" (UID: \"1ddab3a6-1c13-4476-abc5-1c65301ae173\") " Mar 08 21:57:53.295426 master-0 kubenswrapper[7480]: I0308 21:57:53.295381 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ddab3a6-1c13-4476-abc5-1c65301ae173-serving-cert\") pod \"1ddab3a6-1c13-4476-abc5-1c65301ae173\" (UID: \"1ddab3a6-1c13-4476-abc5-1c65301ae173\") " Mar 08 21:57:53.296011 master-0 kubenswrapper[7480]: I0308 21:57:53.295866 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ddab3a6-1c13-4476-abc5-1c65301ae173-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1ddab3a6-1c13-4476-abc5-1c65301ae173" (UID: 
"1ddab3a6-1c13-4476-abc5-1c65301ae173"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 21:57:53.296267 master-0 kubenswrapper[7480]: I0308 21:57:53.296021 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/838f81b9-0423-437e-88ed-88eebfe4c188-config" (OuterVolumeSpecName: "config") pod "838f81b9-0423-437e-88ed-88eebfe4c188" (UID: "838f81b9-0423-437e-88ed-88eebfe4c188"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 21:57:53.296591 master-0 kubenswrapper[7480]: I0308 21:57:53.296360 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/838f81b9-0423-437e-88ed-88eebfe4c188-client-ca\") pod \"838f81b9-0423-437e-88ed-88eebfe4c188\" (UID: \"838f81b9-0423-437e-88ed-88eebfe4c188\") " Mar 08 21:57:53.296926 master-0 kubenswrapper[7480]: I0308 21:57:53.296896 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-image-import-ca\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.297206 master-0 kubenswrapper[7480]: I0308 21:57:53.297094 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-node-pullsecrets\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.297206 master-0 kubenswrapper[7480]: I0308 21:57:53.297116 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-serving-cert\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.297206 master-0 kubenswrapper[7480]: I0308 21:57:53.297117 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/838f81b9-0423-437e-88ed-88eebfe4c188-client-ca" (OuterVolumeSpecName: "client-ca") pod "838f81b9-0423-437e-88ed-88eebfe4c188" (UID: "838f81b9-0423-437e-88ed-88eebfe4c188"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 21:57:53.297206 master-0 kubenswrapper[7480]: I0308 21:57:53.297148 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpb8q\" (UniqueName: \"kubernetes.io/projected/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-kube-api-access-lpb8q\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.297206 master-0 kubenswrapper[7480]: I0308 21:57:53.297167 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-etcd-serving-ca\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.297614 master-0 kubenswrapper[7480]: I0308 21:57:53.297292 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-etcd-client\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.297614 master-0 kubenswrapper[7480]: I0308 21:57:53.297335 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-config\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.297614 master-0 kubenswrapper[7480]: I0308 21:57:53.297355 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-encryption-config\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.297614 master-0 kubenswrapper[7480]: I0308 21:57:53.297384 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-trusted-ca-bundle\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.297614 master-0 kubenswrapper[7480]: I0308 21:57:53.297418 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-audit-dir\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.297614 master-0 kubenswrapper[7480]: I0308 21:57:53.297448 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-audit\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.297614 master-0 kubenswrapper[7480]: I0308 21:57:53.297591 7480 reconciler_common.go:293] "Volume 
detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/838f81b9-0423-437e-88ed-88eebfe4c188-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:53.298029 master-0 kubenswrapper[7480]: I0308 21:57:53.297630 7480 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ddab3a6-1c13-4476-abc5-1c65301ae173-config\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:53.298029 master-0 kubenswrapper[7480]: I0308 21:57:53.297672 7480 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ddab3a6-1c13-4476-abc5-1c65301ae173-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:53.298029 master-0 kubenswrapper[7480]: I0308 21:57:53.297690 7480 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/838f81b9-0423-437e-88ed-88eebfe4c188-config\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:53.298029 master-0 kubenswrapper[7480]: I0308 21:57:53.297700 7480 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6200cf99-d7d2-473f-856b-447430bc9b08-audit\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:53.298832 master-0 kubenswrapper[7480]: I0308 21:57:53.298801 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/838f81b9-0423-437e-88ed-88eebfe4c188-kube-api-access-4tn4g" (OuterVolumeSpecName: "kube-api-access-4tn4g") pod "838f81b9-0423-437e-88ed-88eebfe4c188" (UID: "838f81b9-0423-437e-88ed-88eebfe4c188"). InnerVolumeSpecName "kube-api-access-4tn4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 21:57:53.299838 master-0 kubenswrapper[7480]: I0308 21:57:53.299780 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ddab3a6-1c13-4476-abc5-1c65301ae173-kube-api-access-wggcz" (OuterVolumeSpecName: "kube-api-access-wggcz") pod "1ddab3a6-1c13-4476-abc5-1c65301ae173" (UID: "1ddab3a6-1c13-4476-abc5-1c65301ae173"). InnerVolumeSpecName "kube-api-access-wggcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 21:57:53.300555 master-0 kubenswrapper[7480]: I0308 21:57:53.300523 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/838f81b9-0423-437e-88ed-88eebfe4c188-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "838f81b9-0423-437e-88ed-88eebfe4c188" (UID: "838f81b9-0423-437e-88ed-88eebfe4c188"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 21:57:53.304225 master-0 kubenswrapper[7480]: I0308 21:57:53.304200 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ddab3a6-1c13-4476-abc5-1c65301ae173-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1ddab3a6-1c13-4476-abc5-1c65301ae173" (UID: "1ddab3a6-1c13-4476-abc5-1c65301ae173"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 21:57:53.399362 master-0 kubenswrapper[7480]: I0308 21:57:53.399223 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-image-import-ca\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.399362 master-0 kubenswrapper[7480]: I0308 21:57:53.399283 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-node-pullsecrets\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.399362 master-0 kubenswrapper[7480]: I0308 21:57:53.399313 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-serving-cert\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.399362 master-0 kubenswrapper[7480]: I0308 21:57:53.399339 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-etcd-serving-ca\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.399362 master-0 kubenswrapper[7480]: I0308 21:57:53.399360 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpb8q\" (UniqueName: \"kubernetes.io/projected/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-kube-api-access-lpb8q\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.399689 master-0 kubenswrapper[7480]: I0308 21:57:53.399393 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-etcd-client\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.399689 master-0 kubenswrapper[7480]: I0308 21:57:53.399436 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-config\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.399689 master-0 kubenswrapper[7480]: I0308 21:57:53.399457 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-encryption-config\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.399689 master-0 kubenswrapper[7480]: I0308 21:57:53.399481 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-trusted-ca-bundle\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.399689 master-0 kubenswrapper[7480]: I0308 21:57:53.399506 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-audit-dir\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.399689 master-0 kubenswrapper[7480]: I0308 21:57:53.399541 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-audit\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.399689 master-0 kubenswrapper[7480]: I0308 21:57:53.399582 7480 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/838f81b9-0423-437e-88ed-88eebfe4c188-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:53.399689 master-0 kubenswrapper[7480]: I0308 21:57:53.399596 7480 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wggcz\" (UniqueName: \"kubernetes.io/projected/1ddab3a6-1c13-4476-abc5-1c65301ae173-kube-api-access-wggcz\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:53.399689 master-0 kubenswrapper[7480]: I0308 21:57:53.399610 7480 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ddab3a6-1c13-4476-abc5-1c65301ae173-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:53.399689 master-0 kubenswrapper[7480]: I0308 21:57:53.399624 7480 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tn4g\" (UniqueName: \"kubernetes.io/projected/838f81b9-0423-437e-88ed-88eebfe4c188-kube-api-access-4tn4g\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:53.401180 master-0 kubenswrapper[7480]: I0308 21:57:53.400114 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-audit-dir\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.401180 master-0 kubenswrapper[7480]: I0308 21:57:53.400380 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-node-pullsecrets\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.401180 master-0 kubenswrapper[7480]: I0308 21:57:53.400451 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-audit\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.401180 master-0 kubenswrapper[7480]: I0308 21:57:53.400621 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-image-import-ca\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.401180 master-0 kubenswrapper[7480]: I0308 21:57:53.400778 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-etcd-serving-ca\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.401180 master-0 kubenswrapper[7480]: I0308 21:57:53.401033 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-config\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.401399 master-0 kubenswrapper[7480]: I0308 21:57:53.401315 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-trusted-ca-bundle\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.403602 master-0 kubenswrapper[7480]: I0308 21:57:53.403537 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-encryption-config\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.407170 master-0 kubenswrapper[7480]: I0308 21:57:53.407104 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-serving-cert\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.414384 master-0 kubenswrapper[7480]: I0308 21:57:53.413637 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-etcd-client\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.419456 master-0 kubenswrapper[7480]: I0308 21:57:53.419337 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpb8q\" (UniqueName: \"kubernetes.io/projected/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-kube-api-access-lpb8q\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.528604 master-0 kubenswrapper[7480]: I0308 21:57:53.527944 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:57:53.788816 master-0 kubenswrapper[7480]: I0308 21:57:53.787053 7480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6200cf99-d7d2-473f-856b-447430bc9b08" path="/var/lib/kubelet/pods/6200cf99-d7d2-473f-856b-447430bc9b08/volumes" Mar 08 21:57:54.154363 master-0 kubenswrapper[7480]: I0308 21:57:54.154276 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c4db5b54-m7wjt" Mar 08 21:57:54.154645 master-0 kubenswrapper[7480]: I0308 21:57:54.154388 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn" Mar 08 21:57:54.203279 master-0 kubenswrapper[7480]: I0308 21:57:54.203184 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn"] Mar 08 21:57:54.209675 master-0 kubenswrapper[7480]: I0308 21:57:54.209610 7480 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-685b849569-wt9mn"] Mar 08 21:57:54.246561 master-0 kubenswrapper[7480]: I0308 21:57:54.246482 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c4db5b54-m7wjt"] Mar 08 21:57:54.253295 master-0 kubenswrapper[7480]: I0308 21:57:54.253249 7480 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-c4db5b54-m7wjt"] Mar 08 21:57:54.319889 master-0 kubenswrapper[7480]: I0308 21:57:54.319831 7480 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ddab3a6-1c13-4476-abc5-1c65301ae173-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 08 21:57:55.583999 master-0 kubenswrapper[7480]: I0308 21:57:55.583930 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6584845c9c-w4jhq"] Mar 08 21:57:55.585121 master-0 kubenswrapper[7480]: I0308 21:57:55.585068 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6584845c9c-w4jhq" Mar 08 21:57:55.590349 master-0 kubenswrapper[7480]: I0308 21:57:55.590310 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 08 21:57:55.592142 master-0 kubenswrapper[7480]: I0308 21:57:55.590708 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 08 21:57:55.592142 master-0 kubenswrapper[7480]: I0308 21:57:55.590983 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n"] Mar 08 21:57:55.592142 master-0 kubenswrapper[7480]: I0308 21:57:55.591068 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 08 21:57:55.592142 master-0 kubenswrapper[7480]: I0308 21:57:55.591343 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 08 21:57:55.592142 master-0 kubenswrapper[7480]: I0308 21:57:55.591397 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n" Mar 08 21:57:55.592142 master-0 kubenswrapper[7480]: I0308 21:57:55.591403 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 08 21:57:55.597099 master-0 kubenswrapper[7480]: I0308 21:57:55.596997 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 08 21:57:55.605397 master-0 kubenswrapper[7480]: I0308 21:57:55.605346 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 08 21:57:55.609650 master-0 kubenswrapper[7480]: I0308 21:57:55.609606 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 08 21:57:55.609795 master-0 kubenswrapper[7480]: I0308 21:57:55.609663 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 08 21:57:55.613081 master-0 kubenswrapper[7480]: I0308 21:57:55.613036 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 08 21:57:55.615055 master-0 kubenswrapper[7480]: I0308 21:57:55.614971 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6584845c9c-w4jhq"] Mar 08 21:57:55.617533 master-0 kubenswrapper[7480]: I0308 21:57:55.617498 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 08 21:57:55.624190 master-0 kubenswrapper[7480]: I0308 21:57:55.624054 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n"] Mar 08 21:57:55.739223 master-0 kubenswrapper[7480]: I0308 21:57:55.739170 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6366c13e-beef-4918-991a-33acee9110e1-config\") pod \"route-controller-manager-6584845c9c-w4jhq\" (UID: \"6366c13e-beef-4918-991a-33acee9110e1\") " pod="openshift-route-controller-manager/route-controller-manager-6584845c9c-w4jhq" Mar 08 21:57:55.739223 master-0 kubenswrapper[7480]: I0308 21:57:55.739216 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14fda1fb-c1aa-4b0c-a22a-9b65d3be738d-serving-cert\") pod \"controller-manager-5bf6f788bb-vmt9n\" (UID: \"14fda1fb-c1aa-4b0c-a22a-9b65d3be738d\") " pod="openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n" Mar 08 21:57:55.739718 master-0 kubenswrapper[7480]: I0308 21:57:55.739245 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14fda1fb-c1aa-4b0c-a22a-9b65d3be738d-config\") pod \"controller-manager-5bf6f788bb-vmt9n\" (UID: \"14fda1fb-c1aa-4b0c-a22a-9b65d3be738d\") " pod="openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n" Mar 08 21:57:55.739718 master-0 kubenswrapper[7480]: I0308 21:57:55.739277 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14fda1fb-c1aa-4b0c-a22a-9b65d3be738d-client-ca\") pod \"controller-manager-5bf6f788bb-vmt9n\" (UID: 
\"14fda1fb-c1aa-4b0c-a22a-9b65d3be738d\") " pod="openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n" Mar 08 21:57:55.739718 master-0 kubenswrapper[7480]: I0308 21:57:55.739323 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhv7g\" (UniqueName: \"kubernetes.io/projected/6366c13e-beef-4918-991a-33acee9110e1-kube-api-access-mhv7g\") pod \"route-controller-manager-6584845c9c-w4jhq\" (UID: \"6366c13e-beef-4918-991a-33acee9110e1\") " pod="openshift-route-controller-manager/route-controller-manager-6584845c9c-w4jhq" Mar 08 21:57:55.739718 master-0 kubenswrapper[7480]: I0308 21:57:55.739377 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14fda1fb-c1aa-4b0c-a22a-9b65d3be738d-proxy-ca-bundles\") pod \"controller-manager-5bf6f788bb-vmt9n\" (UID: \"14fda1fb-c1aa-4b0c-a22a-9b65d3be738d\") " pod="openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n" Mar 08 21:57:55.739718 master-0 kubenswrapper[7480]: I0308 21:57:55.739408 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6366c13e-beef-4918-991a-33acee9110e1-serving-cert\") pod \"route-controller-manager-6584845c9c-w4jhq\" (UID: \"6366c13e-beef-4918-991a-33acee9110e1\") " pod="openshift-route-controller-manager/route-controller-manager-6584845c9c-w4jhq" Mar 08 21:57:55.739718 master-0 kubenswrapper[7480]: I0308 21:57:55.739431 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6366c13e-beef-4918-991a-33acee9110e1-client-ca\") pod \"route-controller-manager-6584845c9c-w4jhq\" (UID: \"6366c13e-beef-4918-991a-33acee9110e1\") " pod="openshift-route-controller-manager/route-controller-manager-6584845c9c-w4jhq" Mar 08 21:57:55.739718 master-0 kubenswrapper[7480]: I0308 21:57:55.739469 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glf9l\" (UniqueName: \"kubernetes.io/projected/14fda1fb-c1aa-4b0c-a22a-9b65d3be738d-kube-api-access-glf9l\") pod \"controller-manager-5bf6f788bb-vmt9n\" (UID: \"14fda1fb-c1aa-4b0c-a22a-9b65d3be738d\") " pod="openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n" Mar 08 21:57:55.786268 master-0 kubenswrapper[7480]: I0308 21:57:55.786225 7480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ddab3a6-1c13-4476-abc5-1c65301ae173" path="/var/lib/kubelet/pods/1ddab3a6-1c13-4476-abc5-1c65301ae173/volumes" Mar 08 21:57:55.786620 master-0 kubenswrapper[7480]: I0308 21:57:55.786604 7480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="838f81b9-0423-437e-88ed-88eebfe4c188" path="/var/lib/kubelet/pods/838f81b9-0423-437e-88ed-88eebfe4c188/volumes" Mar 08 21:57:55.841339 master-0 kubenswrapper[7480]: I0308 21:57:55.840426 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14fda1fb-c1aa-4b0c-a22a-9b65d3be738d-proxy-ca-bundles\") pod \"controller-manager-5bf6f788bb-vmt9n\" (UID: \"14fda1fb-c1aa-4b0c-a22a-9b65d3be738d\") " pod="openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n" Mar 08 21:57:55.841339 master-0 kubenswrapper[7480]: I0308 21:57:55.840500 7480 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6366c13e-beef-4918-991a-33acee9110e1-serving-cert\") pod \"route-controller-manager-6584845c9c-w4jhq\" (UID: \"6366c13e-beef-4918-991a-33acee9110e1\") " pod="openshift-route-controller-manager/route-controller-manager-6584845c9c-w4jhq" Mar 08 21:57:55.841339 master-0 kubenswrapper[7480]: I0308 21:57:55.840705 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6366c13e-beef-4918-991a-33acee9110e1-client-ca\") pod \"route-controller-manager-6584845c9c-w4jhq\" (UID: \"6366c13e-beef-4918-991a-33acee9110e1\") " pod="openshift-route-controller-manager/route-controller-manager-6584845c9c-w4jhq" Mar 08 21:57:55.841339 master-0 kubenswrapper[7480]: I0308 21:57:55.840858 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glf9l\" (UniqueName: \"kubernetes.io/projected/14fda1fb-c1aa-4b0c-a22a-9b65d3be738d-kube-api-access-glf9l\") pod \"controller-manager-5bf6f788bb-vmt9n\" (UID: \"14fda1fb-c1aa-4b0c-a22a-9b65d3be738d\") " pod="openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n" Mar 08 21:57:55.841339 master-0 kubenswrapper[7480]: I0308 21:57:55.840900 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6366c13e-beef-4918-991a-33acee9110e1-config\") pod \"route-controller-manager-6584845c9c-w4jhq\" (UID: \"6366c13e-beef-4918-991a-33acee9110e1\") " pod="openshift-route-controller-manager/route-controller-manager-6584845c9c-w4jhq" Mar 08 21:57:55.841339 master-0 kubenswrapper[7480]: I0308 21:57:55.840916 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14fda1fb-c1aa-4b0c-a22a-9b65d3be738d-serving-cert\") pod \"controller-manager-5bf6f788bb-vmt9n\" (UID: \"14fda1fb-c1aa-4b0c-a22a-9b65d3be738d\") " pod="openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n" Mar 08 21:57:55.841339 master-0 kubenswrapper[7480]: I0308 21:57:55.840939 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14fda1fb-c1aa-4b0c-a22a-9b65d3be738d-config\") pod \"controller-manager-5bf6f788bb-vmt9n\" (UID: \"14fda1fb-c1aa-4b0c-a22a-9b65d3be738d\") " pod="openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n" Mar 08 21:57:55.841339 master-0 kubenswrapper[7480]: I0308 21:57:55.840964 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14fda1fb-c1aa-4b0c-a22a-9b65d3be738d-client-ca\") pod \"controller-manager-5bf6f788bb-vmt9n\" (UID: \"14fda1fb-c1aa-4b0c-a22a-9b65d3be738d\") " pod="openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n" Mar 08 21:57:55.841339 master-0 kubenswrapper[7480]: I0308 21:57:55.841001 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhv7g\" (UniqueName: \"kubernetes.io/projected/6366c13e-beef-4918-991a-33acee9110e1-kube-api-access-mhv7g\") pod \"route-controller-manager-6584845c9c-w4jhq\" (UID: \"6366c13e-beef-4918-991a-33acee9110e1\") " pod="openshift-route-controller-manager/route-controller-manager-6584845c9c-w4jhq" Mar 08 21:57:55.842166 master-0 kubenswrapper[7480]: I0308 21:57:55.842044 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/14fda1fb-c1aa-4b0c-a22a-9b65d3be738d-client-ca\") pod \"controller-manager-5bf6f788bb-vmt9n\" (UID: \"14fda1fb-c1aa-4b0c-a22a-9b65d3be738d\") " pod="openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n" Mar 08 21:57:55.842166 master-0 kubenswrapper[7480]: I0308 21:57:55.842145 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14fda1fb-c1aa-4b0c-a22a-9b65d3be738d-proxy-ca-bundles\") pod \"controller-manager-5bf6f788bb-vmt9n\" (UID: \"14fda1fb-c1aa-4b0c-a22a-9b65d3be738d\") " pod="openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n" Mar 08 21:57:55.844142 master-0 kubenswrapper[7480]: I0308 21:57:55.843961 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14fda1fb-c1aa-4b0c-a22a-9b65d3be738d-config\") pod \"controller-manager-5bf6f788bb-vmt9n\" (UID: \"14fda1fb-c1aa-4b0c-a22a-9b65d3be738d\") " pod="openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n" Mar 08 21:57:55.844142 master-0 kubenswrapper[7480]: I0308 21:57:55.844122 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6366c13e-beef-4918-991a-33acee9110e1-client-ca\") pod \"route-controller-manager-6584845c9c-w4jhq\" (UID: \"6366c13e-beef-4918-991a-33acee9110e1\") " pod="openshift-route-controller-manager/route-controller-manager-6584845c9c-w4jhq" Mar 08 21:57:55.846080 master-0 kubenswrapper[7480]: I0308 21:57:55.846018 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6366c13e-beef-4918-991a-33acee9110e1-config\") pod \"route-controller-manager-6584845c9c-w4jhq\" (UID: \"6366c13e-beef-4918-991a-33acee9110e1\") " pod="openshift-route-controller-manager/route-controller-manager-6584845c9c-w4jhq" Mar 08 21:57:55.850576 master-0 kubenswrapper[7480]: I0308 21:57:55.850522 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14fda1fb-c1aa-4b0c-a22a-9b65d3be738d-serving-cert\") pod \"controller-manager-5bf6f788bb-vmt9n\" (UID: \"14fda1fb-c1aa-4b0c-a22a-9b65d3be738d\") " pod="openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n" Mar 08 21:57:55.852514 master-0 kubenswrapper[7480]: I0308 21:57:55.852469 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6366c13e-beef-4918-991a-33acee9110e1-serving-cert\") pod \"route-controller-manager-6584845c9c-w4jhq\" (UID: \"6366c13e-beef-4918-991a-33acee9110e1\") " pod="openshift-route-controller-manager/route-controller-manager-6584845c9c-w4jhq" Mar 08 21:57:55.859233 master-0 kubenswrapper[7480]: I0308 21:57:55.859181 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glf9l\" (UniqueName: \"kubernetes.io/projected/14fda1fb-c1aa-4b0c-a22a-9b65d3be738d-kube-api-access-glf9l\") pod \"controller-manager-5bf6f788bb-vmt9n\" (UID: \"14fda1fb-c1aa-4b0c-a22a-9b65d3be738d\") " pod="openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n" Mar 08 21:57:55.861920 master-0 kubenswrapper[7480]: I0308 21:57:55.861878 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhv7g\" (UniqueName: \"kubernetes.io/projected/6366c13e-beef-4918-991a-33acee9110e1-kube-api-access-mhv7g\") pod 
\"route-controller-manager-6584845c9c-w4jhq\" (UID: \"6366c13e-beef-4918-991a-33acee9110e1\") " pod="openshift-route-controller-manager/route-controller-manager-6584845c9c-w4jhq" Mar 08 21:57:55.956628 master-0 kubenswrapper[7480]: I0308 21:57:55.956538 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6584845c9c-w4jhq" Mar 08 21:57:55.970285 master-0 kubenswrapper[7480]: I0308 21:57:55.970237 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n" Mar 08 21:57:56.304421 master-0 kubenswrapper[7480]: W0308 21:57:56.304342 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3fbcd83_a3e1_4de1_aceb_2692d348e495.slice/crio-7209969f44f9ab5882d68093e19acf5d06b62971db17a4f1d85b7a48c8b7b602 WatchSource:0}: Error finding container 7209969f44f9ab5882d68093e19acf5d06b62971db17a4f1d85b7a48c8b7b602: Status 404 returned error can't find the container with id 7209969f44f9ab5882d68093e19acf5d06b62971db17a4f1d85b7a48c8b7b602 Mar 08 21:57:56.525671 master-0 kubenswrapper[7480]: I0308 21:57:56.525099 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 08 21:57:56.563691 master-0 kubenswrapper[7480]: I0308 21:57:56.563628 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 08 21:57:56.566983 master-0 kubenswrapper[7480]: W0308 21:57:56.566504 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod65148321_8caf_4e9c_80cc_ced8e2a8ac03.slice/crio-454aa3f28a441e0884b9b6514f179a846a609d67518a83cc9ce725de23e88a51 WatchSource:0}: Error finding container 454aa3f28a441e0884b9b6514f179a846a609d67518a83cc9ce725de23e88a51: Status 404 returned error can't find the container with id 454aa3f28a441e0884b9b6514f179a846a609d67518a83cc9ce725de23e88a51 Mar 08 21:57:56.601901 master-0 kubenswrapper[7480]: W0308 21:57:56.600799 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod57a34dbc_eb6d_44f5_b1aa_4762b69382ed.slice/crio-acaa687ebf5d39190e2c2ec89078fb51a5c01299107f28308e1d34d40984afd2 WatchSource:0}: Error finding container acaa687ebf5d39190e2c2ec89078fb51a5c01299107f28308e1d34d40984afd2: Status 404 returned error can't find the container with id acaa687ebf5d39190e2c2ec89078fb51a5c01299107f28308e1d34d40984afd2 Mar 08 21:57:56.620373 master-0 kubenswrapper[7480]: I0308 21:57:56.619205 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n"] Mar 08 21:57:56.656604 master-0 kubenswrapper[7480]: I0308 21:57:56.655583 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6584845c9c-w4jhq"] Mar 08 21:57:56.681277 master-0 kubenswrapper[7480]: W0308 21:57:56.680969 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6366c13e_beef_4918_991a_33acee9110e1.slice/crio-50600a8aafbac81fe6228bdc6e1f392621a39a20a9f82da05589d2c77d0ad50e WatchSource:0}: Error finding container 50600a8aafbac81fe6228bdc6e1f392621a39a20a9f82da05589d2c77d0ad50e: Status 404 returned error can't find the container with id 50600a8aafbac81fe6228bdc6e1f392621a39a20a9f82da05589d2c77d0ad50e Mar 08 
Mar 08 21:57:56.727272 master-0 kubenswrapper[7480]: W0308 21:57:56.727193 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac6c9ea4_84d0_4159_8727_8eff9c7b4a7a.slice/crio-6b55e765e348290b71a16cee0db7116808a6250e19b441558bfccabf4cfbc9d8 WatchSource:0}: Error finding container 6b55e765e348290b71a16cee0db7116808a6250e19b441558bfccabf4cfbc9d8: Status 404 returned error can't find the container with id 6b55e765e348290b71a16cee0db7116808a6250e19b441558bfccabf4cfbc9d8
Mar 08 21:57:57.184656 master-0 kubenswrapper[7480]: I0308 21:57:57.183647 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-ddw98" event={"ID":"1dfc8afd-2330-46a4-ae5b-36522102b332","Type":"ContainerStarted","Data":"fa30505314844ca92e33f96b4695dfb9bc34ac5a945fbb42bad40ad5f234fa56"}
Mar 08 21:57:57.189024 master-0 kubenswrapper[7480]: I0308 21:57:57.187338 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" event={"ID":"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a","Type":"ContainerStarted","Data":"6b55e765e348290b71a16cee0db7116808a6250e19b441558bfccabf4cfbc9d8"}
Mar 08 21:57:57.189024 master-0 kubenswrapper[7480]: I0308 21:57:57.188021 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6584845c9c-w4jhq" event={"ID":"6366c13e-beef-4918-991a-33acee9110e1","Type":"ContainerStarted","Data":"50600a8aafbac81fe6228bdc6e1f392621a39a20a9f82da05589d2c77d0ad50e"}
Mar 08 21:57:57.200776 master-0 kubenswrapper[7480]: I0308 21:57:57.191244 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"57a34dbc-eb6d-44f5-b1aa-4762b69382ed","Type":"ContainerStarted","Data":"acaa687ebf5d39190e2c2ec89078fb51a5c01299107f28308e1d34d40984afd2"}
Mar 08 21:57:57.200776 master-0 kubenswrapper[7480]: I0308 21:57:57.196539 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" event={"ID":"4ef806a4-5486-43a9-8bfa-b1670c888dc1","Type":"ContainerStarted","Data":"4342a61fe3f90cd7b16242cf101e42393f0a324541ef3f468a990da5fedcc62f"}
Mar 08 21:57:57.200776 master-0 kubenswrapper[7480]: I0308 21:57:57.199938 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" event={"ID":"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed","Type":"ContainerStarted","Data":"9164ea1a943910cf9b8dc2033e053c71543704a60a430439fd1cb5398e260074"}
Mar 08 21:57:57.200776 master-0 kubenswrapper[7480]: I0308 21:57:57.199972 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" event={"ID":"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed","Type":"ContainerStarted","Data":"1a0df161078208a525b4d1fb6d4ca6198700570b496ec5545cc3b9587304d8a5"}
Mar 08 21:57:57.206389 master-0 kubenswrapper[7480]: I0308 21:57:57.205430 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"65148321-8caf-4e9c-80cc-ced8e2a8ac03","Type":"ContainerStarted","Data":"454aa3f28a441e0884b9b6514f179a846a609d67518a83cc9ce725de23e88a51"}
Mar 08 21:57:57.207756 master-0 kubenswrapper[7480]: I0308 21:57:57.207516 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" event={"ID":"f3fbcd83-a3e1-4de1-aceb-2692d348e495","Type":"ContainerStarted","Data":"dc257d9f0b8b7220092c839e36e620d477c42e50b90f4361868af98eec13ba42"}
Mar 08 21:57:57.207756 master-0 kubenswrapper[7480]: I0308 21:57:57.207542 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" event={"ID":"f3fbcd83-a3e1-4de1-aceb-2692d348e495","Type":"ContainerStarted","Data":"7209969f44f9ab5882d68093e19acf5d06b62971db17a4f1d85b7a48c8b7b602"}
Mar 08 21:57:57.217760 master-0 kubenswrapper[7480]: I0308 21:57:57.216707 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" event={"ID":"7e0267ba-5dd7-4e81-885f-95b27a7b42ea","Type":"ContainerStarted","Data":"c7c62eecaac8f5df8b2da98122fad8c96cfc54251fbf2aa75a9ba067018db826"}
Mar 08 21:57:57.217760 master-0 kubenswrapper[7480]: I0308 21:57:57.216973 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh"
Mar 08 21:57:57.220541 master-0 kubenswrapper[7480]: I0308 21:57:57.219902 7480 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-5ljhh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" start-of-body=
Mar 08 21:57:57.220541 master-0 kubenswrapper[7480]: I0308 21:57:57.219990 7480 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" podUID="7e0267ba-5dd7-4e81-885f-95b27a7b42ea" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused"
Mar 08 21:57:57.238115 master-0 kubenswrapper[7480]: I0308 21:57:57.237875 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n" event={"ID":"14fda1fb-c1aa-4b0c-a22a-9b65d3be738d","Type":"ContainerStarted","Data":"9b4ee3f8afba95786d7e7f99f9f6f2c9cf49a581eb96cff61ba3f8907df4b5b9"}
Mar 08 21:57:57.240132 master-0 kubenswrapper[7480]: I0308 21:57:57.239501 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" event={"ID":"df48e7e0-0659-48e2-9b6a-32c964ff47b2","Type":"ContainerStarted","Data":"d5596dd51e8955a57e6a69ba7f458a212f6bf75496f2cc7496253f96efcdeccc"}
Mar 08 21:57:57.244319 master-0 kubenswrapper[7480]: I0308 21:57:57.241153 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" event={"ID":"a913c639-ebfc-42a3-85cd-8a460027d3ec","Type":"ContainerStarted","Data":"8bf41d7f7f99e2d4fdb83a25a837511d4994d2551b185499c8662f2b6ce0defe"}
Mar 08 21:57:57.244319 master-0 kubenswrapper[7480]: I0308 21:57:57.242644 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lqdbv" event={"ID":"44e67e41-045e-42ef-8f60-6ef15606d6a2","Type":"ContainerStarted","Data":"5f33344d5680163a9b22b7300b7c2175a35231534f35082c09b01e820a94217d"}
Mar 08 21:57:57.284890 master-0 kubenswrapper[7480]: I0308 21:57:57.284809 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" podStartSLOduration=9.284783241 podStartE2EDuration="9.284783241s" podCreationTimestamp="2026-03-08 21:57:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 21:57:57.282440599 +0000 UTC m=+27.736061201" watchObservedRunningTime="2026-03-08 21:57:57.284783241 +0000 UTC m=+27.738403843"
Mar 08 21:57:57.713180 master-0 kubenswrapper[7480]: I0308 21:57:57.712625 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-65ts8"]
Mar 08 21:57:57.714305 master-0 kubenswrapper[7480]: I0308 21:57:57.713466 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-65ts8"
Mar 08 21:57:57.721086 master-0 kubenswrapper[7480]: I0308 21:57:57.717840 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Mar 08 21:57:57.721086 master-0 kubenswrapper[7480]: I0308 21:57:57.717873 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Mar 08 21:57:57.721086 master-0 kubenswrapper[7480]: I0308 21:57:57.718182 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Mar 08 21:57:57.721086 master-0 kubenswrapper[7480]: I0308 21:57:57.720933 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 08 21:57:57.745088 master-0 kubenswrapper[7480]: I0308 21:57:57.741197 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-65ts8"]
Mar 08 21:57:57.789130 master-0 kubenswrapper[7480]: I0308 21:57:57.787309 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg2dp\" (UniqueName: \"kubernetes.io/projected/0cb21214-292a-48ee-85e2-6b1e62f40cb4-kube-api-access-sg2dp\") pod \"dns-default-65ts8\" (UID: \"0cb21214-292a-48ee-85e2-6b1e62f40cb4\") " pod="openshift-dns/dns-default-65ts8"
Mar 08 21:57:57.789130 master-0 kubenswrapper[7480]: I0308 21:57:57.787363 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0cb21214-292a-48ee-85e2-6b1e62f40cb4-config-volume\") pod \"dns-default-65ts8\" (UID: \"0cb21214-292a-48ee-85e2-6b1e62f40cb4\") " pod="openshift-dns/dns-default-65ts8"
Mar 08 21:57:57.789130 master-0 kubenswrapper[7480]: I0308 21:57:57.787393 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0cb21214-292a-48ee-85e2-6b1e62f40cb4-metrics-tls\") pod \"dns-default-65ts8\" (UID: \"0cb21214-292a-48ee-85e2-6b1e62f40cb4\") " pod="openshift-dns/dns-default-65ts8"
Mar 08 21:57:57.892680 master-0 kubenswrapper[7480]: I0308 21:57:57.890579 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg2dp\" (UniqueName: \"kubernetes.io/projected/0cb21214-292a-48ee-85e2-6b1e62f40cb4-kube-api-access-sg2dp\") pod \"dns-default-65ts8\" (UID: \"0cb21214-292a-48ee-85e2-6b1e62f40cb4\") " pod="openshift-dns/dns-default-65ts8"
Mar 08 21:57:57.892680 master-0 kubenswrapper[7480]: I0308 21:57:57.890682 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0cb21214-292a-48ee-85e2-6b1e62f40cb4-config-volume\") pod \"dns-default-65ts8\" (UID: \"0cb21214-292a-48ee-85e2-6b1e62f40cb4\") " pod="openshift-dns/dns-default-65ts8"
\"0cb21214-292a-48ee-85e2-6b1e62f40cb4\") " pod="openshift-dns/dns-default-65ts8" Mar 08 21:57:57.892680 master-0 kubenswrapper[7480]: I0308 21:57:57.890711 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0cb21214-292a-48ee-85e2-6b1e62f40cb4-metrics-tls\") pod \"dns-default-65ts8\" (UID: \"0cb21214-292a-48ee-85e2-6b1e62f40cb4\") " pod="openshift-dns/dns-default-65ts8" Mar 08 21:57:57.892680 master-0 kubenswrapper[7480]: E0308 21:57:57.890845 7480 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Mar 08 21:57:57.892680 master-0 kubenswrapper[7480]: E0308 21:57:57.890903 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0cb21214-292a-48ee-85e2-6b1e62f40cb4-metrics-tls podName:0cb21214-292a-48ee-85e2-6b1e62f40cb4 nodeName:}" failed. No retries permitted until 2026-03-08 21:57:58.390889584 +0000 UTC m=+28.844510176 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/0cb21214-292a-48ee-85e2-6b1e62f40cb4-metrics-tls") pod "dns-default-65ts8" (UID: "0cb21214-292a-48ee-85e2-6b1e62f40cb4") : secret "dns-default-metrics-tls" not found Mar 08 21:57:57.892680 master-0 kubenswrapper[7480]: I0308 21:57:57.892114 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0cb21214-292a-48ee-85e2-6b1e62f40cb4-config-volume\") pod \"dns-default-65ts8\" (UID: \"0cb21214-292a-48ee-85e2-6b1e62f40cb4\") " pod="openshift-dns/dns-default-65ts8" Mar 08 21:57:57.935476 master-0 kubenswrapper[7480]: I0308 21:57:57.933349 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg2dp\" (UniqueName: \"kubernetes.io/projected/0cb21214-292a-48ee-85e2-6b1e62f40cb4-kube-api-access-sg2dp\") pod \"dns-default-65ts8\" (UID: \"0cb21214-292a-48ee-85e2-6b1e62f40cb4\") " pod="openshift-dns/dns-default-65ts8" Mar 08 21:57:58.102293 master-0 kubenswrapper[7480]: I0308 21:57:58.102234 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-6bf768964c-srxfg"] Mar 08 21:57:58.102878 master-0 kubenswrapper[7480]: I0308 21:57:58.102853 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 21:57:58.107567 master-0 kubenswrapper[7480]: I0308 21:57:58.107542 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 08 21:57:58.107623 master-0 kubenswrapper[7480]: I0308 21:57:58.107574 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 08 21:57:58.109828 master-0 kubenswrapper[7480]: I0308 21:57:58.107671 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 08 21:57:58.109828 master-0 kubenswrapper[7480]: I0308 21:57:58.107701 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 08 21:57:58.109828 master-0 kubenswrapper[7480]: I0308 21:57:58.107735 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 08 21:57:58.109828 master-0 kubenswrapper[7480]: I0308 21:57:58.107809 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 08 21:57:58.109828 master-0 kubenswrapper[7480]: I0308 21:57:58.107868 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 08 21:57:58.109828 master-0 kubenswrapper[7480]: I0308 21:57:58.108025 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 08 21:57:58.130131 master-0 kubenswrapper[7480]: I0308 21:57:58.128503 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-6bf768964c-srxfg"] Mar 08 21:57:58.195212 master-0 kubenswrapper[7480]: I0308 21:57:58.195141 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a5afb146-31d7-4da9-8738-b6c15528233a-audit-policies\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 21:57:58.195212 master-0 kubenswrapper[7480]: I0308 21:57:58.195204 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a5afb146-31d7-4da9-8738-b6c15528233a-etcd-serving-ca\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 21:57:58.195212 master-0 kubenswrapper[7480]: I0308 21:57:58.195228 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a5afb146-31d7-4da9-8738-b6c15528233a-audit-dir\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 21:57:58.195551 master-0 kubenswrapper[7480]: I0308 21:57:58.195402 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvp5b\" (UniqueName: \"kubernetes.io/projected/a5afb146-31d7-4da9-8738-b6c15528233a-kube-api-access-mvp5b\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 21:57:58.195551 
Mar 08 21:57:58.195551 master-0 kubenswrapper[7480]: I0308 21:57:58.195521 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5afb146-31d7-4da9-8738-b6c15528233a-trusted-ca-bundle\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg"
Mar 08 21:57:58.195666 master-0 kubenswrapper[7480]: I0308 21:57:58.195553 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5afb146-31d7-4da9-8738-b6c15528233a-serving-cert\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg"
Mar 08 21:57:58.195666 master-0 kubenswrapper[7480]: I0308 21:57:58.195602 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a5afb146-31d7-4da9-8738-b6c15528233a-encryption-config\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg"
Mar 08 21:57:58.250352 master-0 kubenswrapper[7480]: I0308 21:57:58.250299 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" event={"ID":"df48e7e0-0659-48e2-9b6a-32c964ff47b2","Type":"ContainerStarted","Data":"e3a3f13da6709438b132d9eca172683a5c6defc158c9c31ccc673ac74fd4d281"}
Mar 08 21:57:58.253502 master-0 kubenswrapper[7480]: I0308 21:57:58.253473 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lqdbv" event={"ID":"44e67e41-045e-42ef-8f60-6ef15606d6a2","Type":"ContainerStarted","Data":"df5b0088e640f400af20d24a7b6f80fb2cd20c3d0136567239df8b0010e7bdef"}
Mar 08 21:57:58.258283 master-0 kubenswrapper[7480]: I0308 21:57:58.258234 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"65148321-8caf-4e9c-80cc-ced8e2a8ac03","Type":"ContainerStarted","Data":"00da65f85d6a396bd144d8af9fedcda14ea9c9016de2176d13648b00d0ef6d29"}
Mar 08 21:57:58.263354 master-0 kubenswrapper[7480]: I0308 21:57:58.263322 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-ddw98" event={"ID":"1dfc8afd-2330-46a4-ae5b-36522102b332","Type":"ContainerStarted","Data":"b9a377863624adb6bc6cea75cc961084a7220374ccf2adc5f27393ba6245e41b"}
Mar 08 21:57:58.266561 master-0 kubenswrapper[7480]: I0308 21:57:58.266524 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"57a34dbc-eb6d-44f5-b1aa-4762b69382ed","Type":"ContainerStarted","Data":"11d598a821a501bbacbf414ba9cb9b4053b94492a8ef82c31d41892148ed5df2"}
Mar 08 21:57:58.285285 master-0 kubenswrapper[7480]: I0308 21:57:58.285212 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh"
pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 21:57:58.296480 master-0 kubenswrapper[7480]: I0308 21:57:58.296422 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a5afb146-31d7-4da9-8738-b6c15528233a-audit-policies\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 21:57:58.296480 master-0 kubenswrapper[7480]: I0308 21:57:58.296477 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a5afb146-31d7-4da9-8738-b6c15528233a-etcd-serving-ca\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 21:57:58.296480 master-0 kubenswrapper[7480]: I0308 21:57:58.296496 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a5afb146-31d7-4da9-8738-b6c15528233a-audit-dir\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 21:57:58.296795 master-0 kubenswrapper[7480]: I0308 21:57:58.296530 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvp5b\" (UniqueName: \"kubernetes.io/projected/a5afb146-31d7-4da9-8738-b6c15528233a-kube-api-access-mvp5b\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 21:57:58.296795 master-0 kubenswrapper[7480]: I0308 21:57:58.296564 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a5afb146-31d7-4da9-8738-b6c15528233a-etcd-client\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 21:57:58.296795 master-0 kubenswrapper[7480]: I0308 21:57:58.296583 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5afb146-31d7-4da9-8738-b6c15528233a-trusted-ca-bundle\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 21:57:58.296795 master-0 kubenswrapper[7480]: I0308 21:57:58.296602 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5afb146-31d7-4da9-8738-b6c15528233a-serving-cert\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 21:57:58.296795 master-0 kubenswrapper[7480]: I0308 21:57:58.296622 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a5afb146-31d7-4da9-8738-b6c15528233a-encryption-config\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 21:57:58.307182 master-0 kubenswrapper[7480]: I0308 21:57:58.304384 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/a5afb146-31d7-4da9-8738-b6c15528233a-audit-policies\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 21:57:58.307182 master-0 kubenswrapper[7480]: I0308 21:57:58.304453 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a5afb146-31d7-4da9-8738-b6c15528233a-audit-dir\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 21:57:58.307182 master-0 kubenswrapper[7480]: I0308 21:57:58.304750 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a5afb146-31d7-4da9-8738-b6c15528233a-etcd-serving-ca\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 21:57:58.307182 master-0 kubenswrapper[7480]: I0308 21:57:58.304774 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5afb146-31d7-4da9-8738-b6c15528233a-trusted-ca-bundle\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 21:57:58.307182 master-0 kubenswrapper[7480]: I0308 21:57:58.306931 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a5afb146-31d7-4da9-8738-b6c15528233a-encryption-config\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 21:57:58.309580 master-0 kubenswrapper[7480]: I0308 21:57:58.309365 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5afb146-31d7-4da9-8738-b6c15528233a-serving-cert\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 21:57:58.314057 master-0 kubenswrapper[7480]: I0308 21:57:58.312932 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a5afb146-31d7-4da9-8738-b6c15528233a-etcd-client\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 21:57:58.349447 master-0 kubenswrapper[7480]: I0308 21:57:58.349384 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvp5b\" (UniqueName: \"kubernetes.io/projected/a5afb146-31d7-4da9-8738-b6c15528233a-kube-api-access-mvp5b\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 21:57:58.363788 master-0 kubenswrapper[7480]: I0308 21:57:58.357448 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-qdc2p"] Mar 08 21:57:58.363788 master-0 kubenswrapper[7480]: I0308 21:57:58.357929 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-qdc2p" Mar 08 21:57:58.397585 master-0 kubenswrapper[7480]: I0308 21:57:58.394878 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-1-master-0" podStartSLOduration=7.394853992 podStartE2EDuration="7.394853992s" podCreationTimestamp="2026-03-08 21:57:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 21:57:58.393052514 +0000 UTC m=+28.846673126" watchObservedRunningTime="2026-03-08 21:57:58.394853992 +0000 UTC m=+28.848474614" Mar 08 21:57:58.400200 master-0 kubenswrapper[7480]: I0308 21:57:58.399543 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0cb21214-292a-48ee-85e2-6b1e62f40cb4-metrics-tls\") pod \"dns-default-65ts8\" (UID: \"0cb21214-292a-48ee-85e2-6b1e62f40cb4\") " pod="openshift-dns/dns-default-65ts8" Mar 08 21:57:58.404972 master-0 kubenswrapper[7480]: I0308 21:57:58.404926 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0cb21214-292a-48ee-85e2-6b1e62f40cb4-metrics-tls\") pod \"dns-default-65ts8\" (UID: \"0cb21214-292a-48ee-85e2-6b1e62f40cb4\") " pod="openshift-dns/dns-default-65ts8" Mar 08 21:57:58.424676 master-0 kubenswrapper[7480]: I0308 21:57:58.424598 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-0" podStartSLOduration=7.424580014 podStartE2EDuration="7.424580014s" podCreationTimestamp="2026-03-08 21:57:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 21:57:58.42366462 +0000 UTC m=+28.877285242" watchObservedRunningTime="2026-03-08 21:57:58.424580014 +0000 UTC m=+28.878200616" Mar 08 21:57:58.432127 master-0 kubenswrapper[7480]: I0308 21:57:58.431726 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 21:57:58.535783 master-0 kubenswrapper[7480]: I0308 21:57:58.535163 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/669ef8c8-8a32-4ebd-acc4-e8b2b45286a0-hosts-file\") pod \"node-resolver-qdc2p\" (UID: \"669ef8c8-8a32-4ebd-acc4-e8b2b45286a0\") " pod="openshift-dns/node-resolver-qdc2p" Mar 08 21:57:58.535783 master-0 kubenswrapper[7480]: I0308 21:57:58.535246 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb2lv\" (UniqueName: \"kubernetes.io/projected/669ef8c8-8a32-4ebd-acc4-e8b2b45286a0-kube-api-access-jb2lv\") pod \"node-resolver-qdc2p\" (UID: \"669ef8c8-8a32-4ebd-acc4-e8b2b45286a0\") " pod="openshift-dns/node-resolver-qdc2p" Mar 08 21:57:58.636169 master-0 kubenswrapper[7480]: I0308 21:57:58.636064 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/669ef8c8-8a32-4ebd-acc4-e8b2b45286a0-hosts-file\") pod \"node-resolver-qdc2p\" (UID: \"669ef8c8-8a32-4ebd-acc4-e8b2b45286a0\") " pod="openshift-dns/node-resolver-qdc2p" Mar 08 21:57:58.636265 master-0 kubenswrapper[7480]: I0308 21:57:58.636173 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb2lv\" (UniqueName: \"kubernetes.io/projected/669ef8c8-8a32-4ebd-acc4-e8b2b45286a0-kube-api-access-jb2lv\") pod \"node-resolver-qdc2p\" (UID: \"669ef8c8-8a32-4ebd-acc4-e8b2b45286a0\") " pod="openshift-dns/node-resolver-qdc2p" Mar 08 21:57:58.637169 master-0 kubenswrapper[7480]: I0308 21:57:58.637015 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/669ef8c8-8a32-4ebd-acc4-e8b2b45286a0-hosts-file\") pod \"node-resolver-qdc2p\" (UID: \"669ef8c8-8a32-4ebd-acc4-e8b2b45286a0\") " pod="openshift-dns/node-resolver-qdc2p" Mar 08 21:57:58.652593 master-0 kubenswrapper[7480]: I0308 21:57:58.647314 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-65ts8" Mar 08 21:57:58.656090 master-0 kubenswrapper[7480]: I0308 21:57:58.656049 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb2lv\" (UniqueName: \"kubernetes.io/projected/669ef8c8-8a32-4ebd-acc4-e8b2b45286a0-kube-api-access-jb2lv\") pod \"node-resolver-qdc2p\" (UID: \"669ef8c8-8a32-4ebd-acc4-e8b2b45286a0\") " pod="openshift-dns/node-resolver-qdc2p" Mar 08 21:57:58.700091 master-0 kubenswrapper[7480]: I0308 21:57:58.688300 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-qdc2p" Mar 08 21:57:58.723061 master-0 kubenswrapper[7480]: I0308 21:57:58.712542 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-6bf768964c-srxfg"] Mar 08 21:57:58.723061 master-0 kubenswrapper[7480]: W0308 21:57:58.720220 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod669ef8c8_8a32_4ebd_acc4_e8b2b45286a0.slice/crio-b3c99d21b340bbb5b5d81e3b9c44c2f6826d5e892f5141960667fbe827f38f5e WatchSource:0}: Error finding container b3c99d21b340bbb5b5d81e3b9c44c2f6826d5e892f5141960667fbe827f38f5e: Status 404 returned error can't find the container with id b3c99d21b340bbb5b5d81e3b9c44c2f6826d5e892f5141960667fbe827f38f5e Mar 08 21:57:58.745189 master-0 kubenswrapper[7480]: W0308 21:57:58.744676 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5afb146_31d7_4da9_8738_b6c15528233a.slice/crio-5d5dc92efde818d2d1a5f4cbb624b0e37be0ed6b909a72582b68ff8f3ccab573 WatchSource:0}: Error finding container 5d5dc92efde818d2d1a5f4cbb624b0e37be0ed6b909a72582b68ff8f3ccab573: Status 404 returned error can't find the container with id 5d5dc92efde818d2d1a5f4cbb624b0e37be0ed6b909a72582b68ff8f3ccab573 Mar 08 21:57:59.081306 master-0 kubenswrapper[7480]: I0308 21:57:59.081243 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-65ts8"] Mar 08 21:57:59.272303 master-0 kubenswrapper[7480]: I0308 21:57:59.272166 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-65ts8" event={"ID":"0cb21214-292a-48ee-85e2-6b1e62f40cb4","Type":"ContainerStarted","Data":"940096d4a40b7dc6434a7295ac74e546aac8e0fdcf673fbbc4587227bf159807"} Mar 08 21:57:59.274237 master-0 kubenswrapper[7480]: I0308 21:57:59.273736 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-qdc2p" event={"ID":"669ef8c8-8a32-4ebd-acc4-e8b2b45286a0","Type":"ContainerStarted","Data":"47f807e9d5285fce2274947f7a4eb45b2a4ed3581af2b6bd9b5fbd35c5540072"} Mar 08 21:57:59.274237 master-0 kubenswrapper[7480]: I0308 21:57:59.273759 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-qdc2p" event={"ID":"669ef8c8-8a32-4ebd-acc4-e8b2b45286a0","Type":"ContainerStarted","Data":"b3c99d21b340bbb5b5d81e3b9c44c2f6826d5e892f5141960667fbe827f38f5e"} Mar 08 21:57:59.275903 master-0 kubenswrapper[7480]: I0308 21:57:59.275887 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" event={"ID":"a5afb146-31d7-4da9-8738-b6c15528233a","Type":"ContainerStarted","Data":"5d5dc92efde818d2d1a5f4cbb624b0e37be0ed6b909a72582b68ff8f3ccab573"} Mar 08 21:57:59.808207 master-0 kubenswrapper[7480]: I0308 21:57:59.808123 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-qdc2p" podStartSLOduration=1.808101672 podStartE2EDuration="1.808101672s" podCreationTimestamp="2026-03-08 21:57:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 21:57:59.289288258 +0000 UTC m=+29.742908860" watchObservedRunningTime="2026-03-08 21:57:59.808101672 +0000 UTC m=+30.261722274" Mar 08 21:58:00.837500 master-0 kubenswrapper[7480]: I0308 21:58:00.835417 7480 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv"] Mar 08 21:58:00.837500 master-0 kubenswrapper[7480]: I0308 21:58:00.836121 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 21:58:00.841254 master-0 kubenswrapper[7480]: I0308 21:58:00.841090 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 08 21:58:00.842817 master-0 kubenswrapper[7480]: I0308 21:58:00.842579 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 08 21:58:00.849339 master-0 kubenswrapper[7480]: I0308 21:58:00.849300 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 08 21:58:00.851936 master-0 kubenswrapper[7480]: I0308 21:58:00.850563 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 08 21:58:00.857094 master-0 kubenswrapper[7480]: I0308 21:58:00.855364 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv"] Mar 08 21:58:00.932648 master-0 kubenswrapper[7480]: I0308 21:58:00.932603 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294"] Mar 08 21:58:00.933641 master-0 kubenswrapper[7480]: I0308 21:58:00.933624 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 21:58:00.938706 master-0 kubenswrapper[7480]: I0308 21:58:00.936544 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 08 21:58:00.938706 master-0 kubenswrapper[7480]: I0308 21:58:00.937021 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 08 21:58:00.938706 master-0 kubenswrapper[7480]: I0308 21:58:00.937273 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 08 21:58:00.954192 master-0 kubenswrapper[7480]: I0308 21:58:00.954154 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294"] Mar 08 21:58:00.971279 master-0 kubenswrapper[7480]: I0308 21:58:00.971210 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-qv4bv\" (UID: \"2a91f36f-900e-4b99-9be1-dfc61d8e31d9\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 21:58:00.971279 master-0 kubenswrapper[7480]: I0308 21:58:00.971279 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftn6p\" (UniqueName: \"kubernetes.io/projected/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-kube-api-access-ftn6p\") pod \"catalogd-controller-manager-7f8b8b6f4c-qv4bv\" (UID: \"2a91f36f-900e-4b99-9be1-dfc61d8e31d9\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 21:58:00.971548 master-0 kubenswrapper[7480]: 
Mar 08 21:58:00.971548 master-0 kubenswrapper[7480]: I0308 21:58:00.971364 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-qv4bv\" (UID: \"2a91f36f-900e-4b99-9be1-dfc61d8e31d9\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv"
Mar 08 21:58:00.971548 master-0 kubenswrapper[7480]: I0308 21:58:00.971391 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-qv4bv\" (UID: \"2a91f36f-900e-4b99-9be1-dfc61d8e31d9\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv"
Mar 08 21:58:00.971548 master-0 kubenswrapper[7480]: I0308 21:58:00.971447 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-qv4bv\" (UID: \"2a91f36f-900e-4b99-9be1-dfc61d8e31d9\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv"
Mar 08 21:58:01.072158 master-0 kubenswrapper[7480]: I0308 21:58:01.072048 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/077643a2-ab2d-4f12-9abf-42a1c56d7aff-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-nk294\" (UID: \"077643a2-ab2d-4f12-9abf-42a1c56d7aff\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294"
Mar 08 21:58:01.072439 master-0 kubenswrapper[7480]: I0308 21:58:01.072267 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp26r\" (UniqueName: \"kubernetes.io/projected/077643a2-ab2d-4f12-9abf-42a1c56d7aff-kube-api-access-mp26r\") pod \"operator-controller-controller-manager-6598bfb6c4-nk294\" (UID: \"077643a2-ab2d-4f12-9abf-42a1c56d7aff\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294"
Mar 08 21:58:01.072439 master-0 kubenswrapper[7480]: I0308 21:58:01.072409 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-qv4bv\" (UID: \"2a91f36f-900e-4b99-9be1-dfc61d8e31d9\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv"
Mar 08 21:58:01.072535 master-0 kubenswrapper[7480]: I0308 21:58:01.072487 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/077643a2-ab2d-4f12-9abf-42a1c56d7aff-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-nk294\" (UID: \"077643a2-ab2d-4f12-9abf-42a1c56d7aff\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294"
pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 21:58:01.072706 master-0 kubenswrapper[7480]: I0308 21:58:01.072674 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/077643a2-ab2d-4f12-9abf-42a1c56d7aff-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-nk294\" (UID: \"077643a2-ab2d-4f12-9abf-42a1c56d7aff\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 21:58:01.072802 master-0 kubenswrapper[7480]: I0308 21:58:01.072771 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/077643a2-ab2d-4f12-9abf-42a1c56d7aff-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-nk294\" (UID: \"077643a2-ab2d-4f12-9abf-42a1c56d7aff\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 21:58:01.072859 master-0 kubenswrapper[7480]: I0308 21:58:01.072838 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-qv4bv\" (UID: \"2a91f36f-900e-4b99-9be1-dfc61d8e31d9\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 21:58:01.072899 master-0 kubenswrapper[7480]: I0308 21:58:01.072857 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-qv4bv\" (UID: \"2a91f36f-900e-4b99-9be1-dfc61d8e31d9\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 21:58:01.073146 master-0 kubenswrapper[7480]: I0308 21:58:01.073098 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftn6p\" (UniqueName: \"kubernetes.io/projected/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-kube-api-access-ftn6p\") pod \"catalogd-controller-manager-7f8b8b6f4c-qv4bv\" (UID: \"2a91f36f-900e-4b99-9be1-dfc61d8e31d9\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 21:58:01.073246 master-0 kubenswrapper[7480]: I0308 21:58:01.073226 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-qv4bv\" (UID: \"2a91f36f-900e-4b99-9be1-dfc61d8e31d9\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 21:58:01.073286 master-0 kubenswrapper[7480]: I0308 21:58:01.073253 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-qv4bv\" (UID: \"2a91f36f-900e-4b99-9be1-dfc61d8e31d9\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 21:58:01.073318 master-0 kubenswrapper[7480]: I0308 21:58:01.073293 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-etc-docker\") pod 
\"catalogd-controller-manager-7f8b8b6f4c-qv4bv\" (UID: \"2a91f36f-900e-4b99-9be1-dfc61d8e31d9\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 21:58:01.073351 master-0 kubenswrapper[7480]: I0308 21:58:01.073309 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-qv4bv\" (UID: \"2a91f36f-900e-4b99-9be1-dfc61d8e31d9\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 21:58:01.074986 master-0 kubenswrapper[7480]: I0308 21:58:01.073540 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-qv4bv\" (UID: \"2a91f36f-900e-4b99-9be1-dfc61d8e31d9\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 21:58:01.174926 master-0 kubenswrapper[7480]: I0308 21:58:01.174735 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/077643a2-ab2d-4f12-9abf-42a1c56d7aff-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-nk294\" (UID: \"077643a2-ab2d-4f12-9abf-42a1c56d7aff\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 21:58:01.174926 master-0 kubenswrapper[7480]: I0308 21:58:01.174807 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp26r\" (UniqueName: \"kubernetes.io/projected/077643a2-ab2d-4f12-9abf-42a1c56d7aff-kube-api-access-mp26r\") pod \"operator-controller-controller-manager-6598bfb6c4-nk294\" (UID: \"077643a2-ab2d-4f12-9abf-42a1c56d7aff\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 21:58:01.174926 master-0 kubenswrapper[7480]: I0308 21:58:01.174839 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/077643a2-ab2d-4f12-9abf-42a1c56d7aff-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-nk294\" (UID: \"077643a2-ab2d-4f12-9abf-42a1c56d7aff\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 21:58:01.174926 master-0 kubenswrapper[7480]: I0308 21:58:01.174879 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/077643a2-ab2d-4f12-9abf-42a1c56d7aff-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-nk294\" (UID: \"077643a2-ab2d-4f12-9abf-42a1c56d7aff\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 21:58:01.174926 master-0 kubenswrapper[7480]: I0308 21:58:01.174912 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/077643a2-ab2d-4f12-9abf-42a1c56d7aff-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-nk294\" (UID: \"077643a2-ab2d-4f12-9abf-42a1c56d7aff\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 21:58:01.175497 master-0 kubenswrapper[7480]: I0308 21:58:01.175083 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" 
(UniqueName: \"kubernetes.io/host-path/077643a2-ab2d-4f12-9abf-42a1c56d7aff-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-nk294\" (UID: \"077643a2-ab2d-4f12-9abf-42a1c56d7aff\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 21:58:01.175497 master-0 kubenswrapper[7480]: E0308 21:58:01.175209 7480 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: configmap "operator-controller-trusted-ca-bundle" not found Mar 08 21:58:01.175497 master-0 kubenswrapper[7480]: E0308 21:58:01.175233 7480 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294: configmap "operator-controller-trusted-ca-bundle" not found Mar 08 21:58:01.175497 master-0 kubenswrapper[7480]: E0308 21:58:01.175294 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/077643a2-ab2d-4f12-9abf-42a1c56d7aff-ca-certs podName:077643a2-ab2d-4f12-9abf-42a1c56d7aff nodeName:}" failed. No retries permitted until 2026-03-08 21:58:01.675273905 +0000 UTC m=+32.128894507 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/077643a2-ab2d-4f12-9abf-42a1c56d7aff-ca-certs") pod "operator-controller-controller-manager-6598bfb6c4-nk294" (UID: "077643a2-ab2d-4f12-9abf-42a1c56d7aff") : configmap "operator-controller-trusted-ca-bundle" not found Mar 08 21:58:01.177356 master-0 kubenswrapper[7480]: I0308 21:58:01.177313 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/077643a2-ab2d-4f12-9abf-42a1c56d7aff-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-nk294\" (UID: \"077643a2-ab2d-4f12-9abf-42a1c56d7aff\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 21:58:01.178507 master-0 kubenswrapper[7480]: I0308 21:58:01.178449 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/077643a2-ab2d-4f12-9abf-42a1c56d7aff-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-nk294\" (UID: \"077643a2-ab2d-4f12-9abf-42a1c56d7aff\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 21:58:01.201231 master-0 kubenswrapper[7480]: I0308 21:58:01.201017 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftn6p\" (UniqueName: \"kubernetes.io/projected/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-kube-api-access-ftn6p\") pod \"catalogd-controller-manager-7f8b8b6f4c-qv4bv\" (UID: \"2a91f36f-900e-4b99-9be1-dfc61d8e31d9\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 21:58:01.201730 master-0 kubenswrapper[7480]: I0308 21:58:01.201671 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-qv4bv\" (UID: \"2a91f36f-900e-4b99-9be1-dfc61d8e31d9\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 21:58:01.202972 master-0 kubenswrapper[7480]: I0308 21:58:01.202926 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp26r\" (UniqueName: 
\"kubernetes.io/projected/077643a2-ab2d-4f12-9abf-42a1c56d7aff-kube-api-access-mp26r\") pod \"operator-controller-controller-manager-6598bfb6c4-nk294\" (UID: \"077643a2-ab2d-4f12-9abf-42a1c56d7aff\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 21:58:01.205553 master-0 kubenswrapper[7480]: I0308 21:58:01.205516 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-qv4bv\" (UID: \"2a91f36f-900e-4b99-9be1-dfc61d8e31d9\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 21:58:01.497829 master-0 kubenswrapper[7480]: I0308 21:58:01.497647 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 21:58:01.666731 master-0 kubenswrapper[7480]: I0308 21:58:01.666430 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 08 21:58:01.667312 master-0 kubenswrapper[7480]: I0308 21:58:01.666683 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-1-master-0" podUID="65148321-8caf-4e9c-80cc-ced8e2a8ac03" containerName="installer" containerID="cri-o://00da65f85d6a396bd144d8af9fedcda14ea9c9016de2176d13648b00d0ef6d29" gracePeriod=30 Mar 08 21:58:01.680527 master-0 kubenswrapper[7480]: I0308 21:58:01.680470 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/077643a2-ab2d-4f12-9abf-42a1c56d7aff-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-nk294\" (UID: \"077643a2-ab2d-4f12-9abf-42a1c56d7aff\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 21:58:01.690181 master-0 kubenswrapper[7480]: I0308 21:58:01.689913 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/077643a2-ab2d-4f12-9abf-42a1c56d7aff-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-nk294\" (UID: \"077643a2-ab2d-4f12-9abf-42a1c56d7aff\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 21:58:01.859252 master-0 kubenswrapper[7480]: I0308 21:58:01.859150 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 21:58:02.296626 master-0 kubenswrapper[7480]: I0308 21:58:02.296024 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8"] Mar 08 21:58:02.296626 master-0 kubenswrapper[7480]: I0308 21:58:02.296348 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" podUID="d287e2ca-f134-4e34-96f7-50a3055ee119" containerName="cluster-version-operator" containerID="cri-o://8d516b9f38991558f05c5da2875d325fa5984b9cedd39d8165f024180e98bc7a" gracePeriod=130 Mar 08 21:58:02.595343 master-0 kubenswrapper[7480]: I0308 21:58:02.595277 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-x5zxr\" (UID: \"be431b74-1116-4b0f-8b25-bbb0408411b0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 21:58:02.595602 master-0 kubenswrapper[7480]: I0308 21:58:02.595362 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert\") pod \"olm-operator-d64cfc9db-xqh7x\" (UID: \"3c50dd1f-fcbc-412c-a1cc-0738ea4464e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" Mar 08 21:58:02.595708 master-0 kubenswrapper[7480]: I0308 21:58:02.595658 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert\") pod \"catalog-operator-7d9c49f57b-6q5t2\" (UID: \"83b5f0b6-adee-4820-8212-b4d182b178d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" Mar 08 21:58:02.599681 master-0 kubenswrapper[7480]: I0308 21:58:02.599598 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-x5zxr\" (UID: \"be431b74-1116-4b0f-8b25-bbb0408411b0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 21:58:02.599681 master-0 kubenswrapper[7480]: I0308 21:58:02.599652 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert\") pod \"olm-operator-d64cfc9db-xqh7x\" (UID: \"3c50dd1f-fcbc-412c-a1cc-0738ea4464e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" Mar 08 21:58:02.600835 master-0 kubenswrapper[7480]: I0308 21:58:02.600764 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert\") pod \"catalog-operator-7d9c49f57b-6q5t2\" (UID: \"83b5f0b6-adee-4820-8212-b4d182b178d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" Mar 08 21:58:02.831146 master-0 kubenswrapper[7480]: I0308 21:58:02.831034 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 21:58:02.831626 master-0 kubenswrapper[7480]: I0308 21:58:02.831601 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" Mar 08 21:58:02.833758 master-0 kubenswrapper[7480]: I0308 21:58:02.833686 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" Mar 08 21:58:03.297506 master-0 kubenswrapper[7480]: I0308 21:58:03.297322 7480 generic.go:334] "Generic (PLEG): container finished" podID="d287e2ca-f134-4e34-96f7-50a3055ee119" containerID="8d516b9f38991558f05c5da2875d325fa5984b9cedd39d8165f024180e98bc7a" exitCode=0 Mar 08 21:58:03.297506 master-0 kubenswrapper[7480]: I0308 21:58:03.297378 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" event={"ID":"d287e2ca-f134-4e34-96f7-50a3055ee119","Type":"ContainerDied","Data":"8d516b9f38991558f05c5da2875d325fa5984b9cedd39d8165f024180e98bc7a"} Mar 08 21:58:03.730787 master-0 kubenswrapper[7480]: I0308 21:58:03.730736 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:58:03.910718 master-0 kubenswrapper[7480]: I0308 21:58:03.910625 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d287e2ca-f134-4e34-96f7-50a3055ee119-service-ca\") pod \"d287e2ca-f134-4e34-96f7-50a3055ee119\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " Mar 08 21:58:03.911016 master-0 kubenswrapper[7480]: I0308 21:58:03.910759 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d287e2ca-f134-4e34-96f7-50a3055ee119-etc-ssl-certs\") pod \"d287e2ca-f134-4e34-96f7-50a3055ee119\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " Mar 08 21:58:03.911016 master-0 kubenswrapper[7480]: I0308 21:58:03.910869 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d287e2ca-f134-4e34-96f7-50a3055ee119-kube-api-access\") pod \"d287e2ca-f134-4e34-96f7-50a3055ee119\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " Mar 08 21:58:03.911016 master-0 kubenswrapper[7480]: I0308 21:58:03.910874 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d287e2ca-f134-4e34-96f7-50a3055ee119-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "d287e2ca-f134-4e34-96f7-50a3055ee119" (UID: "d287e2ca-f134-4e34-96f7-50a3055ee119"). InnerVolumeSpecName "etc-ssl-certs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 21:58:03.911016 master-0 kubenswrapper[7480]: I0308 21:58:03.910955 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert\") pod \"d287e2ca-f134-4e34-96f7-50a3055ee119\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " Mar 08 21:58:03.911016 master-0 kubenswrapper[7480]: I0308 21:58:03.911006 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d287e2ca-f134-4e34-96f7-50a3055ee119-etc-cvo-updatepayloads\") pod \"d287e2ca-f134-4e34-96f7-50a3055ee119\" (UID: \"d287e2ca-f134-4e34-96f7-50a3055ee119\") " Mar 08 21:58:03.911262 master-0 kubenswrapper[7480]: I0308 21:58:03.911112 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d287e2ca-f134-4e34-96f7-50a3055ee119-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "d287e2ca-f134-4e34-96f7-50a3055ee119" (UID: "d287e2ca-f134-4e34-96f7-50a3055ee119"). InnerVolumeSpecName "etc-cvo-updatepayloads". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 21:58:03.911393 master-0 kubenswrapper[7480]: I0308 21:58:03.911352 7480 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d287e2ca-f134-4e34-96f7-50a3055ee119-etc-cvo-updatepayloads\") on node \"master-0\" DevicePath \"\"" Mar 08 21:58:03.911393 master-0 kubenswrapper[7480]: I0308 21:58:03.911388 7480 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d287e2ca-f134-4e34-96f7-50a3055ee119-etc-ssl-certs\") on node \"master-0\" DevicePath \"\"" Mar 08 21:58:03.911966 master-0 kubenswrapper[7480]: I0308 21:58:03.911875 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d287e2ca-f134-4e34-96f7-50a3055ee119-service-ca" (OuterVolumeSpecName: "service-ca") pod "d287e2ca-f134-4e34-96f7-50a3055ee119" (UID: "d287e2ca-f134-4e34-96f7-50a3055ee119"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 21:58:03.915116 master-0 kubenswrapper[7480]: I0308 21:58:03.915022 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d287e2ca-f134-4e34-96f7-50a3055ee119" (UID: "d287e2ca-f134-4e34-96f7-50a3055ee119"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 21:58:03.922757 master-0 kubenswrapper[7480]: I0308 21:58:03.922355 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d287e2ca-f134-4e34-96f7-50a3055ee119-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d287e2ca-f134-4e34-96f7-50a3055ee119" (UID: "d287e2ca-f134-4e34-96f7-50a3055ee119"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 21:58:04.012321 master-0 kubenswrapper[7480]: I0308 21:58:04.012234 7480 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d287e2ca-f134-4e34-96f7-50a3055ee119-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 08 21:58:04.012321 master-0 kubenswrapper[7480]: I0308 21:58:04.012302 7480 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d287e2ca-f134-4e34-96f7-50a3055ee119-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 08 21:58:04.012321 master-0 kubenswrapper[7480]: I0308 21:58:04.012325 7480 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d287e2ca-f134-4e34-96f7-50a3055ee119-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 08 21:58:04.269548 master-0 kubenswrapper[7480]: I0308 21:58:04.269352 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 08 21:58:04.269790 master-0 kubenswrapper[7480]: E0308 21:58:04.269637 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d287e2ca-f134-4e34-96f7-50a3055ee119" containerName="cluster-version-operator" Mar 08 21:58:04.269790 master-0 kubenswrapper[7480]: I0308 21:58:04.269657 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="d287e2ca-f134-4e34-96f7-50a3055ee119" containerName="cluster-version-operator" Mar 08 21:58:04.269790 master-0 kubenswrapper[7480]: I0308 21:58:04.269775 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="d287e2ca-f134-4e34-96f7-50a3055ee119" containerName="cluster-version-operator" Mar 08 21:58:04.270361 master-0 kubenswrapper[7480]: I0308 21:58:04.270330 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 08 21:58:04.281658 master-0 kubenswrapper[7480]: I0308 21:58:04.281577 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 08 21:58:04.306479 master-0 kubenswrapper[7480]: I0308 21:58:04.306431 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" event={"ID":"d287e2ca-f134-4e34-96f7-50a3055ee119","Type":"ContainerDied","Data":"2d305e2126da2df672b5029a4e5d93937d2fb815ad69e0ad77e8d2f95bf5f7ba"} Mar 08 21:58:04.306921 master-0 kubenswrapper[7480]: I0308 21:58:04.306527 7480 scope.go:117] "RemoveContainer" containerID="8d516b9f38991558f05c5da2875d325fa5984b9cedd39d8165f024180e98bc7a" Mar 08 21:58:04.306921 master-0 kubenswrapper[7480]: I0308 21:58:04.306538 7480 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8" Mar 08 21:58:04.353775 master-0 kubenswrapper[7480]: I0308 21:58:04.353703 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8"] Mar 08 21:58:04.358797 master-0 kubenswrapper[7480]: I0308 21:58:04.357642 7480 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-d8fd8"] Mar 08 21:58:04.393021 master-0 kubenswrapper[7480]: I0308 21:58:04.392946 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2"] Mar 08 21:58:04.393968 master-0 kubenswrapper[7480]: I0308 21:58:04.393939 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2" Mar 08 21:58:04.398120 master-0 kubenswrapper[7480]: I0308 21:58:04.397897 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 08 21:58:04.398276 master-0 kubenswrapper[7480]: I0308 21:58:04.398209 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 08 21:58:04.398386 master-0 kubenswrapper[7480]: I0308 21:58:04.398362 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 08 21:58:04.415681 master-0 kubenswrapper[7480]: I0308 21:58:04.415623 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/48bbd836-7516-4bc4-9e94-a70026eeacfb-var-lock\") pod \"installer-2-master-0\" (UID: \"48bbd836-7516-4bc4-9e94-a70026eeacfb\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 08 21:58:04.417192 master-0 kubenswrapper[7480]: I0308 21:58:04.416065 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/48bbd836-7516-4bc4-9e94-a70026eeacfb-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"48bbd836-7516-4bc4-9e94-a70026eeacfb\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 08 21:58:04.417346 master-0 kubenswrapper[7480]: I0308 21:58:04.417321 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/48bbd836-7516-4bc4-9e94-a70026eeacfb-kube-api-access\") pod \"installer-2-master-0\" (UID: \"48bbd836-7516-4bc4-9e94-a70026eeacfb\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 08 21:58:04.519024 master-0 kubenswrapper[7480]: I0308 21:58:04.518590 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/48bbd836-7516-4bc4-9e94-a70026eeacfb-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"48bbd836-7516-4bc4-9e94-a70026eeacfb\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 08 21:58:04.519024 master-0 kubenswrapper[7480]: I0308 21:58:04.518681 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-ln9l2\" (UID: \"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9\") 
" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2" Mar 08 21:58:04.519024 master-0 kubenswrapper[7480]: I0308 21:58:04.518714 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-ln9l2\" (UID: \"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2" Mar 08 21:58:04.519024 master-0 kubenswrapper[7480]: I0308 21:58:04.518745 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9-serving-cert\") pod \"cluster-version-operator-8c9c967c7-ln9l2\" (UID: \"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2" Mar 08 21:58:04.519024 master-0 kubenswrapper[7480]: I0308 21:58:04.518770 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9-service-ca\") pod \"cluster-version-operator-8c9c967c7-ln9l2\" (UID: \"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2" Mar 08 21:58:04.519024 master-0 kubenswrapper[7480]: I0308 21:58:04.518805 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-ln9l2\" (UID: \"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2" Mar 08 21:58:04.519024 master-0 kubenswrapper[7480]: I0308 21:58:04.518832 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/48bbd836-7516-4bc4-9e94-a70026eeacfb-kube-api-access\") pod \"installer-2-master-0\" (UID: \"48bbd836-7516-4bc4-9e94-a70026eeacfb\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 08 21:58:04.519024 master-0 kubenswrapper[7480]: I0308 21:58:04.518868 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/48bbd836-7516-4bc4-9e94-a70026eeacfb-var-lock\") pod \"installer-2-master-0\" (UID: \"48bbd836-7516-4bc4-9e94-a70026eeacfb\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 08 21:58:04.519024 master-0 kubenswrapper[7480]: I0308 21:58:04.518951 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/48bbd836-7516-4bc4-9e94-a70026eeacfb-var-lock\") pod \"installer-2-master-0\" (UID: \"48bbd836-7516-4bc4-9e94-a70026eeacfb\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 08 21:58:04.519024 master-0 kubenswrapper[7480]: I0308 21:58:04.519003 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/48bbd836-7516-4bc4-9e94-a70026eeacfb-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"48bbd836-7516-4bc4-9e94-a70026eeacfb\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 08 21:58:04.541428 master-0 kubenswrapper[7480]: I0308 21:58:04.541374 7480 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/48bbd836-7516-4bc4-9e94-a70026eeacfb-kube-api-access\") pod \"installer-2-master-0\" (UID: \"48bbd836-7516-4bc4-9e94-a70026eeacfb\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 08 21:58:04.620560 master-0 kubenswrapper[7480]: I0308 21:58:04.620274 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 08 21:58:04.626237 master-0 kubenswrapper[7480]: I0308 21:58:04.620832 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9-serving-cert\") pod \"cluster-version-operator-8c9c967c7-ln9l2\" (UID: \"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2" Mar 08 21:58:04.626237 master-0 kubenswrapper[7480]: I0308 21:58:04.620921 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9-service-ca\") pod \"cluster-version-operator-8c9c967c7-ln9l2\" (UID: \"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2" Mar 08 21:58:04.626237 master-0 kubenswrapper[7480]: I0308 21:58:04.620988 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-ln9l2\" (UID: \"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2" Mar 08 21:58:04.626237 master-0 kubenswrapper[7480]: I0308 21:58:04.621109 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-ln9l2\" (UID: \"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2" Mar 08 21:58:04.626237 master-0 kubenswrapper[7480]: I0308 21:58:04.621153 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-ln9l2\" (UID: \"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2" Mar 08 21:58:04.626237 master-0 kubenswrapper[7480]: I0308 21:58:04.622605 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-ln9l2\" (UID: \"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2" Mar 08 21:58:04.626237 master-0 kubenswrapper[7480]: I0308 21:58:04.622707 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-ln9l2\" (UID: \"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9\") " 
pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2" Mar 08 21:58:04.626237 master-0 kubenswrapper[7480]: I0308 21:58:04.623053 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9-service-ca\") pod \"cluster-version-operator-8c9c967c7-ln9l2\" (UID: \"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2" Mar 08 21:58:04.638953 master-0 kubenswrapper[7480]: I0308 21:58:04.638895 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-ln9l2\" (UID: \"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2" Mar 08 21:58:04.646834 master-0 kubenswrapper[7480]: I0308 21:58:04.646796 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9-serving-cert\") pod \"cluster-version-operator-8c9c967c7-ln9l2\" (UID: \"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2" Mar 08 21:58:04.718363 master-0 kubenswrapper[7480]: I0308 21:58:04.718193 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2" Mar 08 21:58:04.914489 master-0 kubenswrapper[7480]: I0308 21:58:04.914427 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2"] Mar 08 21:58:04.938323 master-0 kubenswrapper[7480]: W0308 21:58:04.936487 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83b5f0b6_adee_4820_8212_b4d182b178d2.slice/crio-1760bfc2a8a6cbf8ae227ef4de6bfa43714b1849e66a5382da34146e555ddd0f WatchSource:0}: Error finding container 1760bfc2a8a6cbf8ae227ef4de6bfa43714b1849e66a5382da34146e555ddd0f: Status 404 returned error can't find the container with id 1760bfc2a8a6cbf8ae227ef4de6bfa43714b1849e66a5382da34146e555ddd0f Mar 08 21:58:05.215615 master-0 kubenswrapper[7480]: I0308 21:58:05.215546 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr"] Mar 08 21:58:05.221132 master-0 kubenswrapper[7480]: I0308 21:58:05.220131 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x"] Mar 08 21:58:05.234107 master-0 kubenswrapper[7480]: W0308 21:58:05.233471 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe431b74_1116_4b0f_8b25_bbb0408411b0.slice/crio-409ed7dd551984c65c75de609cd08ca919d308e8d542269375ed00b6340ac461 WatchSource:0}: Error finding container 409ed7dd551984c65c75de609cd08ca919d308e8d542269375ed00b6340ac461: Status 404 returned error can't find the container with id 409ed7dd551984c65c75de609cd08ca919d308e8d542269375ed00b6340ac461 Mar 08 21:58:05.253657 master-0 kubenswrapper[7480]: I0308 21:58:05.253599 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv"] Mar 08 21:58:05.265788 master-0 
kubenswrapper[7480]: W0308 21:58:05.265705 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a91f36f_900e_4b99_9be1_dfc61d8e31d9.slice/crio-d186c173d59660d4939673a18315486c8567701538340aa7cd6b89f06bbf1013 WatchSource:0}: Error finding container d186c173d59660d4939673a18315486c8567701538340aa7cd6b89f06bbf1013: Status 404 returned error can't find the container with id d186c173d59660d4939673a18315486c8567701538340aa7cd6b89f06bbf1013 Mar 08 21:58:05.289148 master-0 kubenswrapper[7480]: I0308 21:58:05.289061 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294"] Mar 08 21:58:05.291207 master-0 kubenswrapper[7480]: I0308 21:58:05.290292 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 08 21:58:05.326936 master-0 kubenswrapper[7480]: I0308 21:58:05.326893 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" event={"ID":"3c50dd1f-fcbc-412c-a1cc-0738ea4464e0","Type":"ContainerStarted","Data":"5acb1dbbaadd24be1aa51015d4ffabe0583806b310c9bb173c49c064dc0af3d3"} Mar 08 21:58:05.329470 master-0 kubenswrapper[7480]: W0308 21:58:05.329418 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod077643a2_ab2d_4f12_9abf_42a1c56d7aff.slice/crio-be53893516c99fbabb0efb0e7767df7d102aeacc1fd8341cd8ee128754131110 WatchSource:0}: Error finding container be53893516c99fbabb0efb0e7767df7d102aeacc1fd8341cd8ee128754131110: Status 404 returned error can't find the container with id be53893516c99fbabb0efb0e7767df7d102aeacc1fd8341cd8ee128754131110 Mar 08 21:58:05.329865 master-0 kubenswrapper[7480]: I0308 21:58:05.329693 7480 generic.go:334] "Generic (PLEG): container finished" podID="ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a" containerID="25dcfb26438ac1a8e2908fd8e10cac8fb870f8887f8afa80fca87f762351557e" exitCode=0 Mar 08 21:58:05.330115 master-0 kubenswrapper[7480]: I0308 21:58:05.330043 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" event={"ID":"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a","Type":"ContainerDied","Data":"25dcfb26438ac1a8e2908fd8e10cac8fb870f8887f8afa80fca87f762351557e"} Mar 08 21:58:05.345418 master-0 kubenswrapper[7480]: I0308 21:58:05.345192 7480 generic.go:334] "Generic (PLEG): container finished" podID="a5afb146-31d7-4da9-8738-b6c15528233a" containerID="1f70617dd998f936fb35fbf67cf4dddc810c8e16cdc8c2b46a2145b980e52414" exitCode=0 Mar 08 21:58:05.345418 master-0 kubenswrapper[7480]: I0308 21:58:05.345291 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" event={"ID":"a5afb146-31d7-4da9-8738-b6c15528233a","Type":"ContainerDied","Data":"1f70617dd998f936fb35fbf67cf4dddc810c8e16cdc8c2b46a2145b980e52414"} Mar 08 21:58:05.371595 master-0 kubenswrapper[7480]: I0308 21:58:05.363212 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" event={"ID":"be431b74-1116-4b0f-8b25-bbb0408411b0","Type":"ContainerStarted","Data":"409ed7dd551984c65c75de609cd08ca919d308e8d542269375ed00b6340ac461"} Mar 08 21:58:05.371765 master-0 kubenswrapper[7480]: I0308 21:58:05.371716 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/dns-default-65ts8" event={"ID":"0cb21214-292a-48ee-85e2-6b1e62f40cb4","Type":"ContainerStarted","Data":"1cfcb83edf8c27df479212bb6c499d0187e931da1f4d2c86a1e4b18a2365e17f"} Mar 08 21:58:05.373903 master-0 kubenswrapper[7480]: I0308 21:58:05.373847 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" event={"ID":"83b5f0b6-adee-4820-8212-b4d182b178d2","Type":"ContainerStarted","Data":"1760bfc2a8a6cbf8ae227ef4de6bfa43714b1849e66a5382da34146e555ddd0f"} Mar 08 21:58:05.393428 master-0 kubenswrapper[7480]: I0308 21:58:05.393366 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2" event={"ID":"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9","Type":"ContainerStarted","Data":"a22b29816e03690faf00c5c6d5f7ea0b06750cd2c50fe9f666b86154f5e12d55"} Mar 08 21:58:05.393428 master-0 kubenswrapper[7480]: I0308 21:58:05.393418 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2" event={"ID":"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9","Type":"ContainerStarted","Data":"d3f24d18018ae4fd0cde9a9605ef8a24287eac4d74c241af3ae19429f61d0495"} Mar 08 21:58:05.396163 master-0 kubenswrapper[7480]: I0308 21:58:05.395544 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n" event={"ID":"14fda1fb-c1aa-4b0c-a22a-9b65d3be738d","Type":"ContainerStarted","Data":"8bccabdb4928515f7b56812aa0bca7cb8124c5887acea182d37b4988604c1998"} Mar 08 21:58:05.396163 master-0 kubenswrapper[7480]: I0308 21:58:05.395802 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n" Mar 08 21:58:05.410423 master-0 kubenswrapper[7480]: I0308 21:58:05.409280 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6584845c9c-w4jhq" event={"ID":"6366c13e-beef-4918-991a-33acee9110e1","Type":"ContainerStarted","Data":"b39e026848b75146bec034ccac251732900c2c5808d7d8a44b5421d1189232be"} Mar 08 21:58:05.417253 master-0 kubenswrapper[7480]: I0308 21:58:05.411361 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" event={"ID":"2a91f36f-900e-4b99-9be1-dfc61d8e31d9","Type":"ContainerStarted","Data":"d186c173d59660d4939673a18315486c8567701538340aa7cd6b89f06bbf1013"} Mar 08 21:58:05.417253 master-0 kubenswrapper[7480]: I0308 21:58:05.415649 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n" Mar 08 21:58:05.419641 master-0 kubenswrapper[7480]: I0308 21:58:05.419572 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2" podStartSLOduration=1.419553576 podStartE2EDuration="1.419553576s" podCreationTimestamp="2026-03-08 21:58:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 21:58:05.419184067 +0000 UTC m=+35.872804689" watchObservedRunningTime="2026-03-08 21:58:05.419553576 +0000 UTC m=+35.873174188" Mar 08 21:58:05.496844 master-0 kubenswrapper[7480]: I0308 21:58:05.496542 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-6584845c9c-w4jhq" podStartSLOduration=6.489631336 podStartE2EDuration="13.496518427s" podCreationTimestamp="2026-03-08 21:57:52 +0000 UTC" firstStartedPulling="2026-03-08 21:57:56.70492964 +0000 UTC m=+27.158550242" lastFinishedPulling="2026-03-08 21:58:03.711816741 +0000 UTC m=+34.165437333" observedRunningTime="2026-03-08 21:58:05.493624761 +0000 UTC m=+35.947245363" watchObservedRunningTime="2026-03-08 21:58:05.496518427 +0000 UTC m=+35.950139029" Mar 08 21:58:05.496844 master-0 kubenswrapper[7480]: I0308 21:58:05.496652 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n" podStartSLOduration=5.678214816 podStartE2EDuration="13.49664919s" podCreationTimestamp="2026-03-08 21:57:52 +0000 UTC" firstStartedPulling="2026-03-08 21:57:56.684046547 +0000 UTC m=+27.137667149" lastFinishedPulling="2026-03-08 21:58:04.502480921 +0000 UTC m=+34.956101523" observedRunningTime="2026-03-08 21:58:05.45241899 +0000 UTC m=+35.906039592" watchObservedRunningTime="2026-03-08 21:58:05.49664919 +0000 UTC m=+35.950269802" Mar 08 21:58:05.793847 master-0 kubenswrapper[7480]: I0308 21:58:05.793794 7480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d287e2ca-f134-4e34-96f7-50a3055ee119" path="/var/lib/kubelet/pods/d287e2ca-f134-4e34-96f7-50a3055ee119/volumes" Mar 08 21:58:05.960014 master-0 kubenswrapper[7480]: I0308 21:58:05.959360 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6584845c9c-w4jhq" Mar 08 21:58:05.966622 master-0 kubenswrapper[7480]: I0308 21:58:05.966590 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6584845c9c-w4jhq" Mar 08 21:58:06.427317 master-0 kubenswrapper[7480]: I0308 21:58:06.426488 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" event={"ID":"be431b74-1116-4b0f-8b25-bbb0408411b0","Type":"ContainerStarted","Data":"57c8aa9b18c347fc77bfc02f5a09149b7844bf09403e274ce81dbd6022c67d26"} Mar 08 21:58:06.429188 master-0 kubenswrapper[7480]: I0308 21:58:06.428744 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" event={"ID":"a5afb146-31d7-4da9-8738-b6c15528233a","Type":"ContainerStarted","Data":"09f644edf932f3c7a117f699d47754e018bad866251462b4281bbbb8c5438352"} Mar 08 21:58:06.430977 master-0 kubenswrapper[7480]: I0308 21:58:06.430932 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"48bbd836-7516-4bc4-9e94-a70026eeacfb","Type":"ContainerStarted","Data":"a03f70eb7bb6662196379d09895b74a95ea3e48f201239717bc8a293f70c32d3"} Mar 08 21:58:06.431036 master-0 kubenswrapper[7480]: I0308 21:58:06.430981 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"48bbd836-7516-4bc4-9e94-a70026eeacfb","Type":"ContainerStarted","Data":"91ff50d53f50e62a1073d72e3fdfe439592d027558b9949f54f7b1873fb4eec0"} Mar 08 21:58:06.433471 master-0 kubenswrapper[7480]: I0308 21:58:06.433370 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-65ts8" 
event={"ID":"0cb21214-292a-48ee-85e2-6b1e62f40cb4","Type":"ContainerStarted","Data":"081d0802e3f974aded513159484c54517ae098c48bd0d0fb786272b12257b48b"} Mar 08 21:58:06.434222 master-0 kubenswrapper[7480]: I0308 21:58:06.434167 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-65ts8" Mar 08 21:58:06.437642 master-0 kubenswrapper[7480]: I0308 21:58:06.437608 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" event={"ID":"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a","Type":"ContainerStarted","Data":"2276ccb6b0f5fd08f5e56e3b902e8a6182b2a12013f6e0c332a45427339723ee"} Mar 08 21:58:06.437642 master-0 kubenswrapper[7480]: I0308 21:58:06.437639 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" event={"ID":"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a","Type":"ContainerStarted","Data":"9da1b27d0d2a56f2d1836cb9a7ce90ff6ce0283a3fbf3cce14a836de8ec2bd26"} Mar 08 21:58:06.444701 master-0 kubenswrapper[7480]: I0308 21:58:06.444640 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" event={"ID":"077643a2-ab2d-4f12-9abf-42a1c56d7aff","Type":"ContainerStarted","Data":"60d3d202a39452d626dd6317c7caf06c5f21b7e1a289e0984f94bd5f6ec57f48"} Mar 08 21:58:06.444701 master-0 kubenswrapper[7480]: I0308 21:58:06.444698 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" event={"ID":"077643a2-ab2d-4f12-9abf-42a1c56d7aff","Type":"ContainerStarted","Data":"5946b7f2d9d566068ae07c485f39d2cd8eea56a2d551b41eae667da0ce359cfb"} Mar 08 21:58:06.444803 master-0 kubenswrapper[7480]: I0308 21:58:06.444709 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" event={"ID":"077643a2-ab2d-4f12-9abf-42a1c56d7aff","Type":"ContainerStarted","Data":"be53893516c99fbabb0efb0e7767df7d102aeacc1fd8341cd8ee128754131110"} Mar 08 21:58:06.445645 master-0 kubenswrapper[7480]: I0308 21:58:06.445505 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 21:58:06.457108 master-0 kubenswrapper[7480]: I0308 21:58:06.456980 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" podStartSLOduration=2.643326302 podStartE2EDuration="8.456955059s" podCreationTimestamp="2026-03-08 21:57:58 +0000 UTC" firstStartedPulling="2026-03-08 21:57:58.747004845 +0000 UTC m=+29.200625447" lastFinishedPulling="2026-03-08 21:58:04.560633602 +0000 UTC m=+35.014254204" observedRunningTime="2026-03-08 21:58:06.453097918 +0000 UTC m=+36.906718550" watchObservedRunningTime="2026-03-08 21:58:06.456955059 +0000 UTC m=+36.910575661" Mar 08 21:58:06.464840 master-0 kubenswrapper[7480]: I0308 21:58:06.464767 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" event={"ID":"2a91f36f-900e-4b99-9be1-dfc61d8e31d9","Type":"ContainerStarted","Data":"69b4132a818df716de03fdd12ebf683c551197394c831d762cb2338396e793c4"} Mar 08 21:58:06.464840 master-0 kubenswrapper[7480]: I0308 21:58:06.464822 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" event={"ID":"2a91f36f-900e-4b99-9be1-dfc61d8e31d9","Type":"ContainerStarted","Data":"5166b178c19287374a46a00ef88c5dfe4724a44440d45b1e58c811dacd606607"} Mar 08 21:58:06.474717 master-0 kubenswrapper[7480]: I0308 21:58:06.472967 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" podStartSLOduration=6.472945484 podStartE2EDuration="6.472945484s" podCreationTimestamp="2026-03-08 21:58:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 21:58:06.47279363 +0000 UTC m=+36.926414242" watchObservedRunningTime="2026-03-08 21:58:06.472945484 +0000 UTC m=+36.926566086" Mar 08 21:58:06.520109 master-0 kubenswrapper[7480]: I0308 21:58:06.516945 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-65ts8" podStartSLOduration=4.030098692 podStartE2EDuration="9.516910767s" podCreationTimestamp="2026-03-08 21:57:57 +0000 UTC" firstStartedPulling="2026-03-08 21:57:59.093239112 +0000 UTC m=+29.546859714" lastFinishedPulling="2026-03-08 21:58:04.580051197 +0000 UTC m=+35.033671789" observedRunningTime="2026-03-08 21:58:06.489908465 +0000 UTC m=+36.943529067" watchObservedRunningTime="2026-03-08 21:58:06.516910767 +0000 UTC m=+36.970531369" Mar 08 21:58:06.544113 master-0 kubenswrapper[7480]: I0308 21:58:06.539316 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" podStartSLOduration=7.740086565 podStartE2EDuration="15.539290039s" podCreationTimestamp="2026-03-08 21:57:51 +0000 UTC" firstStartedPulling="2026-03-08 21:57:56.736417898 +0000 UTC m=+27.190038490" lastFinishedPulling="2026-03-08 21:58:04.535621362 +0000 UTC m=+34.989241964" observedRunningTime="2026-03-08 21:58:06.517447341 +0000 UTC m=+36.971067943" watchObservedRunningTime="2026-03-08 21:58:06.539290039 +0000 UTC m=+36.992910641" Mar 08 21:58:06.544113 master-0 kubenswrapper[7480]: I0308 21:58:06.539972 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-master-0" podStartSLOduration=2.539967516 podStartE2EDuration="2.539967516s" podCreationTimestamp="2026-03-08 21:58:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 21:58:06.538527198 +0000 UTC m=+36.992147800" watchObservedRunningTime="2026-03-08 21:58:06.539967516 +0000 UTC m=+36.993588118" Mar 08 21:58:06.570110 master-0 kubenswrapper[7480]: I0308 21:58:06.567514 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" podStartSLOduration=6.567494642 podStartE2EDuration="6.567494642s" podCreationTimestamp="2026-03-08 21:58:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 21:58:06.564836202 +0000 UTC m=+37.018456814" watchObservedRunningTime="2026-03-08 21:58:06.567494642 +0000 UTC m=+37.021115244" Mar 08 21:58:07.475103 master-0 kubenswrapper[7480]: I0308 21:58:07.473222 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 21:58:08.433391 master-0 
kubenswrapper[7480]: I0308 21:58:08.433135 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 21:58:08.433391 master-0 kubenswrapper[7480]: I0308 21:58:08.433202 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 21:58:08.441593 master-0 kubenswrapper[7480]: I0308 21:58:08.441547 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 21:58:08.487874 master-0 kubenswrapper[7480]: I0308 21:58:08.487811 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 21:58:08.529150 master-0 kubenswrapper[7480]: I0308 21:58:08.529026 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:58:08.529386 master-0 kubenswrapper[7480]: I0308 21:58:08.529194 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 21:58:08.539843 master-0 kubenswrapper[7480]: I0308 21:58:08.539756 7480 patch_prober.go:28] interesting pod/apiserver-6f9445b8fd-w44n6 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 08 21:58:08.539843 master-0 kubenswrapper[7480]: [+]log ok Mar 08 21:58:08.539843 master-0 kubenswrapper[7480]: [+]etcd ok Mar 08 21:58:08.539843 master-0 kubenswrapper[7480]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 08 21:58:08.539843 master-0 kubenswrapper[7480]: [+]poststarthook/generic-apiserver-start-informers ok Mar 08 21:58:08.539843 master-0 kubenswrapper[7480]: [+]poststarthook/max-in-flight-filter ok Mar 08 21:58:08.539843 master-0 kubenswrapper[7480]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 08 21:58:08.539843 master-0 kubenswrapper[7480]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 08 21:58:08.539843 master-0 kubenswrapper[7480]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 08 21:58:08.539843 master-0 kubenswrapper[7480]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Mar 08 21:58:08.539843 master-0 kubenswrapper[7480]: [+]poststarthook/project.openshift.io-projectcache ok Mar 08 21:58:08.539843 master-0 kubenswrapper[7480]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 08 21:58:08.539843 master-0 kubenswrapper[7480]: [+]poststarthook/openshift.io-startinformers ok Mar 08 21:58:08.539843 master-0 kubenswrapper[7480]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 08 21:58:08.539843 master-0 kubenswrapper[7480]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 08 21:58:08.539843 master-0 kubenswrapper[7480]: livez check failed Mar 08 21:58:08.540509 master-0 kubenswrapper[7480]: I0308 21:58:08.539898 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" podUID="ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 21:58:10.830290 master-0 kubenswrapper[7480]: I0308 21:58:10.829865 7480 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 08 21:58:10.830926 master-0 kubenswrapper[7480]: I0308 21:58:10.830788 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 08 21:58:10.836000 master-0 kubenswrapper[7480]: I0308 21:58:10.835946 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 08 21:58:10.843398 master-0 kubenswrapper[7480]: I0308 21:58:10.843345 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 08 21:58:10.937137 master-0 kubenswrapper[7480]: I0308 21:58:10.937055 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8a9c4d25-8230-4111-b1ad-fd6427c16488-kube-api-access\") pod \"installer-1-master-0\" (UID: \"8a9c4d25-8230-4111-b1ad-fd6427c16488\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 08 21:58:10.937368 master-0 kubenswrapper[7480]: I0308 21:58:10.937200 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8a9c4d25-8230-4111-b1ad-fd6427c16488-var-lock\") pod \"installer-1-master-0\" (UID: \"8a9c4d25-8230-4111-b1ad-fd6427c16488\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 08 21:58:10.937368 master-0 kubenswrapper[7480]: I0308 21:58:10.937227 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8a9c4d25-8230-4111-b1ad-fd6427c16488-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"8a9c4d25-8230-4111-b1ad-fd6427c16488\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 08 21:58:11.038208 master-0 kubenswrapper[7480]: I0308 21:58:11.038153 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8a9c4d25-8230-4111-b1ad-fd6427c16488-var-lock\") pod \"installer-1-master-0\" (UID: \"8a9c4d25-8230-4111-b1ad-fd6427c16488\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 08 21:58:11.038471 master-0 kubenswrapper[7480]: I0308 21:58:11.038310 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8a9c4d25-8230-4111-b1ad-fd6427c16488-var-lock\") pod \"installer-1-master-0\" (UID: \"8a9c4d25-8230-4111-b1ad-fd6427c16488\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 08 21:58:11.038471 master-0 kubenswrapper[7480]: I0308 21:58:11.038369 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8a9c4d25-8230-4111-b1ad-fd6427c16488-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"8a9c4d25-8230-4111-b1ad-fd6427c16488\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 08 21:58:11.038471 master-0 kubenswrapper[7480]: I0308 21:58:11.038323 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8a9c4d25-8230-4111-b1ad-fd6427c16488-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"8a9c4d25-8230-4111-b1ad-fd6427c16488\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 08 21:58:11.038628 master-0 
kubenswrapper[7480]: I0308 21:58:11.038543 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8a9c4d25-8230-4111-b1ad-fd6427c16488-kube-api-access\") pod \"installer-1-master-0\" (UID: \"8a9c4d25-8230-4111-b1ad-fd6427c16488\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 08 21:58:11.063342 master-0 kubenswrapper[7480]: I0308 21:58:11.063284 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8a9c4d25-8230-4111-b1ad-fd6427c16488-kube-api-access\") pod \"installer-1-master-0\" (UID: \"8a9c4d25-8230-4111-b1ad-fd6427c16488\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 08 21:58:11.217964 master-0 kubenswrapper[7480]: I0308 21:58:11.217582 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 08 21:58:11.502168 master-0 kubenswrapper[7480]: I0308 21:58:11.501203 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 21:58:11.644482 master-0 kubenswrapper[7480]: I0308 21:58:11.644424 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 08 21:58:11.654365 master-0 kubenswrapper[7480]: W0308 21:58:11.654315 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8a9c4d25_8230_4111_b1ad_fd6427c16488.slice/crio-a70da3d7e0f56ee98fe1de17a4ecc7f84ec0445b52ed29de54a5f11f2f33237d WatchSource:0}: Error finding container a70da3d7e0f56ee98fe1de17a4ecc7f84ec0445b52ed29de54a5f11f2f33237d: Status 404 returned error can't find the container with id a70da3d7e0f56ee98fe1de17a4ecc7f84ec0445b52ed29de54a5f11f2f33237d Mar 08 21:58:11.865307 master-0 kubenswrapper[7480]: I0308 21:58:11.865244 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 21:58:11.865953 master-0 kubenswrapper[7480]: I0308 21:58:11.865541 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 08 21:58:11.865953 master-0 kubenswrapper[7480]: I0308 21:58:11.865880 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-2-master-0" podUID="48bbd836-7516-4bc4-9e94-a70026eeacfb" containerName="installer" containerID="cri-o://a03f70eb7bb6662196379d09895b74a95ea3e48f201239717bc8a293f70c32d3" gracePeriod=30 Mar 08 21:58:11.874194 master-0 kubenswrapper[7480]: I0308 21:58:11.872921 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 21:58:12.322862 master-0 kubenswrapper[7480]: I0308 21:58:12.322782 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_48bbd836-7516-4bc4-9e94-a70026eeacfb/installer/0.log" Mar 08 21:58:12.323122 master-0 kubenswrapper[7480]: I0308 21:58:12.323092 7480 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Mar 08 21:58:12.345313 master-0 kubenswrapper[7480]: I0308 21:58:12.345252 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n"]
Mar 08 21:58:12.345539 master-0 kubenswrapper[7480]: I0308 21:58:12.345488 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n" podUID="14fda1fb-c1aa-4b0c-a22a-9b65d3be738d" containerName="controller-manager" containerID="cri-o://8bccabdb4928515f7b56812aa0bca7cb8124c5887acea182d37b4988604c1998" gracePeriod=30
Mar 08 21:58:12.374803 master-0 kubenswrapper[7480]: I0308 21:58:12.373891 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/48bbd836-7516-4bc4-9e94-a70026eeacfb-var-lock\") pod \"48bbd836-7516-4bc4-9e94-a70026eeacfb\" (UID: \"48bbd836-7516-4bc4-9e94-a70026eeacfb\") "
Mar 08 21:58:12.374803 master-0 kubenswrapper[7480]: I0308 21:58:12.374235 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48bbd836-7516-4bc4-9e94-a70026eeacfb-var-lock" (OuterVolumeSpecName: "var-lock") pod "48bbd836-7516-4bc4-9e94-a70026eeacfb" (UID: "48bbd836-7516-4bc4-9e94-a70026eeacfb"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 21:58:12.383219 master-0 kubenswrapper[7480]: I0308 21:58:12.382507 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6584845c9c-w4jhq"]
Mar 08 21:58:12.383219 master-0 kubenswrapper[7480]: I0308 21:58:12.382728 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6584845c9c-w4jhq" podUID="6366c13e-beef-4918-991a-33acee9110e1" containerName="route-controller-manager" containerID="cri-o://b39e026848b75146bec034ccac251732900c2c5808d7d8a44b5421d1189232be" gracePeriod=30
Mar 08 21:58:12.475929 master-0 kubenswrapper[7480]: I0308 21:58:12.475858 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/48bbd836-7516-4bc4-9e94-a70026eeacfb-kubelet-dir\") pod \"48bbd836-7516-4bc4-9e94-a70026eeacfb\" (UID: \"48bbd836-7516-4bc4-9e94-a70026eeacfb\") "
Mar 08 21:58:12.476175 master-0 kubenswrapper[7480]: I0308 21:58:12.476044 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/48bbd836-7516-4bc4-9e94-a70026eeacfb-kube-api-access\") pod \"48bbd836-7516-4bc4-9e94-a70026eeacfb\" (UID: \"48bbd836-7516-4bc4-9e94-a70026eeacfb\") "
Mar 08 21:58:12.476372 master-0 kubenswrapper[7480]: I0308 21:58:12.476348 7480 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/48bbd836-7516-4bc4-9e94-a70026eeacfb-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 08 21:58:12.476859 master-0 kubenswrapper[7480]: I0308 21:58:12.476799 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48bbd836-7516-4bc4-9e94-a70026eeacfb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "48bbd836-7516-4bc4-9e94-a70026eeacfb" (UID: "48bbd836-7516-4bc4-9e94-a70026eeacfb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 21:58:12.494313 master-0 kubenswrapper[7480]: I0308 21:58:12.481582 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48bbd836-7516-4bc4-9e94-a70026eeacfb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "48bbd836-7516-4bc4-9e94-a70026eeacfb" (UID: "48bbd836-7516-4bc4-9e94-a70026eeacfb"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 21:58:12.525524 master-0 kubenswrapper[7480]: I0308 21:58:12.520668 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"8a9c4d25-8230-4111-b1ad-fd6427c16488","Type":"ContainerStarted","Data":"ed03c19f3cd282d9dc8aba54e8beb63ed0e914d6163152f2611419e70c3ad5ad"}
Mar 08 21:58:12.525524 master-0 kubenswrapper[7480]: I0308 21:58:12.520715 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"8a9c4d25-8230-4111-b1ad-fd6427c16488","Type":"ContainerStarted","Data":"a70da3d7e0f56ee98fe1de17a4ecc7f84ec0445b52ed29de54a5f11f2f33237d"}
Mar 08 21:58:12.525524 master-0 kubenswrapper[7480]: I0308 21:58:12.522408 7480 generic.go:334] "Generic (PLEG): container finished" podID="14fda1fb-c1aa-4b0c-a22a-9b65d3be738d" containerID="8bccabdb4928515f7b56812aa0bca7cb8124c5887acea182d37b4988604c1998" exitCode=0
Mar 08 21:58:12.525524 master-0 kubenswrapper[7480]: I0308 21:58:12.522445 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n" event={"ID":"14fda1fb-c1aa-4b0c-a22a-9b65d3be738d","Type":"ContainerDied","Data":"8bccabdb4928515f7b56812aa0bca7cb8124c5887acea182d37b4988604c1998"}
Mar 08 21:58:12.525524 master-0 kubenswrapper[7480]: I0308 21:58:12.523438 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_48bbd836-7516-4bc4-9e94-a70026eeacfb/installer/0.log"
Mar 08 21:58:12.525524 master-0 kubenswrapper[7480]: I0308 21:58:12.523461 7480 generic.go:334] "Generic (PLEG): container finished" podID="48bbd836-7516-4bc4-9e94-a70026eeacfb" containerID="a03f70eb7bb6662196379d09895b74a95ea3e48f201239717bc8a293f70c32d3" exitCode=1
Mar 08 21:58:12.525524 master-0 kubenswrapper[7480]: I0308 21:58:12.523478 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"48bbd836-7516-4bc4-9e94-a70026eeacfb","Type":"ContainerDied","Data":"a03f70eb7bb6662196379d09895b74a95ea3e48f201239717bc8a293f70c32d3"}
Mar 08 21:58:12.525524 master-0 kubenswrapper[7480]: I0308 21:58:12.523491 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"48bbd836-7516-4bc4-9e94-a70026eeacfb","Type":"ContainerDied","Data":"91ff50d53f50e62a1073d72e3fdfe439592d027558b9949f54f7b1873fb4eec0"}
Mar 08 21:58:12.525524 master-0 kubenswrapper[7480]: I0308 21:58:12.523507 7480 scope.go:117] "RemoveContainer" containerID="a03f70eb7bb6662196379d09895b74a95ea3e48f201239717bc8a293f70c32d3"
Mar 08 21:58:12.525524 master-0 kubenswrapper[7480]: I0308 21:58:12.523595 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Mar 08 21:58:12.558434 master-0 kubenswrapper[7480]: I0308 21:58:12.558065 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-0" podStartSLOduration=2.558050058 podStartE2EDuration="2.558050058s" podCreationTimestamp="2026-03-08 21:58:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 21:58:12.557520834 +0000 UTC m=+43.011141436" watchObservedRunningTime="2026-03-08 21:58:12.558050058 +0000 UTC m=+43.011670660"
Mar 08 21:58:12.570252 master-0 kubenswrapper[7480]: I0308 21:58:12.568577 7480 scope.go:117] "RemoveContainer" containerID="a03f70eb7bb6662196379d09895b74a95ea3e48f201239717bc8a293f70c32d3"
Mar 08 21:58:12.578178 master-0 kubenswrapper[7480]: E0308 21:58:12.571210 7480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a03f70eb7bb6662196379d09895b74a95ea3e48f201239717bc8a293f70c32d3\": container with ID starting with a03f70eb7bb6662196379d09895b74a95ea3e48f201239717bc8a293f70c32d3 not found: ID does not exist" containerID="a03f70eb7bb6662196379d09895b74a95ea3e48f201239717bc8a293f70c32d3"
Mar 08 21:58:12.578178 master-0 kubenswrapper[7480]: I0308 21:58:12.571268 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a03f70eb7bb6662196379d09895b74a95ea3e48f201239717bc8a293f70c32d3"} err="failed to get container status \"a03f70eb7bb6662196379d09895b74a95ea3e48f201239717bc8a293f70c32d3\": rpc error: code = NotFound desc = could not find container \"a03f70eb7bb6662196379d09895b74a95ea3e48f201239717bc8a293f70c32d3\": container with ID starting with a03f70eb7bb6662196379d09895b74a95ea3e48f201239717bc8a293f70c32d3 not found: ID does not exist"
Mar 08 21:58:12.578178 master-0 kubenswrapper[7480]: I0308 21:58:12.577270 7480 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/48bbd836-7516-4bc4-9e94-a70026eeacfb-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 21:58:12.578178 master-0 kubenswrapper[7480]: I0308 21:58:12.577288 7480 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/48bbd836-7516-4bc4-9e94-a70026eeacfb-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 08 21:58:12.601436 master-0 kubenswrapper[7480]: I0308 21:58:12.599639 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 08 21:58:12.606378 master-0 kubenswrapper[7480]: I0308 21:58:12.604466 7480 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 08 21:58:12.878707 master-0 kubenswrapper[7480]: I0308 21:58:12.878565 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n"
Mar 08 21:58:12.881902 master-0 kubenswrapper[7480]: I0308 21:58:12.881851 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6584845c9c-w4jhq"
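Note: the "RemoveContainer" / "ContainerStatus from runtime service failed ... NotFound" pair above is a benign race, not a failure. By the time the kubelet re-queried CRI-O for the dead installer container, the earlier teardown had already removed it, so the delete is simply treated as done. A minimal sketch of that idempotent-delete pattern, assuming a hypothetical runtimeService interface in place of the real CRI client (the kubelet's actual path runs through its pod_container_deletor plumbing):

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // runtimeService is a hypothetical stand-in for the CRI runtime client.
    type runtimeService interface {
        RemoveContainer(ctx context.Context, containerID string) error
    }

    // removeIfPresent tolerates a container that is already gone, which is why
    // the NotFound above is logged and then ignored rather than failing the sync.
    func removeIfPresent(ctx context.Context, rt runtimeService, id string) error {
        err := rt.RemoveContainer(ctx, id)
        if status.Code(err) == codes.NotFound {
            fmt.Printf("container %s already removed; treating as success\n", id)
            return nil // absence is the desired end state
        }
        return err // nil on success; real errors propagate for retry
    }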
Mar 08 21:58:12.987436 master-0 kubenswrapper[7480]: I0308 21:58:12.987147 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6366c13e-beef-4918-991a-33acee9110e1-client-ca\") pod \"6366c13e-beef-4918-991a-33acee9110e1\" (UID: \"6366c13e-beef-4918-991a-33acee9110e1\") "
Mar 08 21:58:12.987436 master-0 kubenswrapper[7480]: I0308 21:58:12.987223 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6366c13e-beef-4918-991a-33acee9110e1-config\") pod \"6366c13e-beef-4918-991a-33acee9110e1\" (UID: \"6366c13e-beef-4918-991a-33acee9110e1\") "
Mar 08 21:58:12.987436 master-0 kubenswrapper[7480]: I0308 21:58:12.987264 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14fda1fb-c1aa-4b0c-a22a-9b65d3be738d-serving-cert\") pod \"14fda1fb-c1aa-4b0c-a22a-9b65d3be738d\" (UID: \"14fda1fb-c1aa-4b0c-a22a-9b65d3be738d\") "
Mar 08 21:58:12.987436 master-0 kubenswrapper[7480]: I0308 21:58:12.987297 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glf9l\" (UniqueName: \"kubernetes.io/projected/14fda1fb-c1aa-4b0c-a22a-9b65d3be738d-kube-api-access-glf9l\") pod \"14fda1fb-c1aa-4b0c-a22a-9b65d3be738d\" (UID: \"14fda1fb-c1aa-4b0c-a22a-9b65d3be738d\") "
Mar 08 21:58:12.987436 master-0 kubenswrapper[7480]: I0308 21:58:12.987338 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14fda1fb-c1aa-4b0c-a22a-9b65d3be738d-client-ca\") pod \"14fda1fb-c1aa-4b0c-a22a-9b65d3be738d\" (UID: \"14fda1fb-c1aa-4b0c-a22a-9b65d3be738d\") "
Mar 08 21:58:12.987436 master-0 kubenswrapper[7480]: I0308 21:58:12.987375 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhv7g\" (UniqueName: \"kubernetes.io/projected/6366c13e-beef-4918-991a-33acee9110e1-kube-api-access-mhv7g\") pod \"6366c13e-beef-4918-991a-33acee9110e1\" (UID: \"6366c13e-beef-4918-991a-33acee9110e1\") "
Mar 08 21:58:12.987436 master-0 kubenswrapper[7480]: I0308 21:58:12.987401 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14fda1fb-c1aa-4b0c-a22a-9b65d3be738d-proxy-ca-bundles\") pod \"14fda1fb-c1aa-4b0c-a22a-9b65d3be738d\" (UID: \"14fda1fb-c1aa-4b0c-a22a-9b65d3be738d\") "
Mar 08 21:58:12.987436 master-0 kubenswrapper[7480]: I0308 21:58:12.987420 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14fda1fb-c1aa-4b0c-a22a-9b65d3be738d-config\") pod \"14fda1fb-c1aa-4b0c-a22a-9b65d3be738d\" (UID: \"14fda1fb-c1aa-4b0c-a22a-9b65d3be738d\") "
Mar 08 21:58:12.987436 master-0 kubenswrapper[7480]: I0308 21:58:12.987440 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6366c13e-beef-4918-991a-33acee9110e1-serving-cert\") pod \"6366c13e-beef-4918-991a-33acee9110e1\" (UID: \"6366c13e-beef-4918-991a-33acee9110e1\") "
Mar 08 21:58:12.988248 master-0 kubenswrapper[7480]: I0308 21:58:12.987697 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6366c13e-beef-4918-991a-33acee9110e1-client-ca" (OuterVolumeSpecName: "client-ca") pod "6366c13e-beef-4918-991a-33acee9110e1" (UID: "6366c13e-beef-4918-991a-33acee9110e1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 21:58:12.988248 master-0 kubenswrapper[7480]: I0308 21:58:12.987907 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6366c13e-beef-4918-991a-33acee9110e1-config" (OuterVolumeSpecName: "config") pod "6366c13e-beef-4918-991a-33acee9110e1" (UID: "6366c13e-beef-4918-991a-33acee9110e1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 21:58:12.988321 master-0 kubenswrapper[7480]: I0308 21:58:12.988128 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14fda1fb-c1aa-4b0c-a22a-9b65d3be738d-client-ca" (OuterVolumeSpecName: "client-ca") pod "14fda1fb-c1aa-4b0c-a22a-9b65d3be738d" (UID: "14fda1fb-c1aa-4b0c-a22a-9b65d3be738d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 21:58:12.988687 master-0 kubenswrapper[7480]: I0308 21:58:12.988556 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14fda1fb-c1aa-4b0c-a22a-9b65d3be738d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "14fda1fb-c1aa-4b0c-a22a-9b65d3be738d" (UID: "14fda1fb-c1aa-4b0c-a22a-9b65d3be738d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 21:58:12.988959 master-0 kubenswrapper[7480]: I0308 21:58:12.988890 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14fda1fb-c1aa-4b0c-a22a-9b65d3be738d-config" (OuterVolumeSpecName: "config") pod "14fda1fb-c1aa-4b0c-a22a-9b65d3be738d" (UID: "14fda1fb-c1aa-4b0c-a22a-9b65d3be738d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 21:58:12.991670 master-0 kubenswrapper[7480]: I0308 21:58:12.991500 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14fda1fb-c1aa-4b0c-a22a-9b65d3be738d-kube-api-access-glf9l" (OuterVolumeSpecName: "kube-api-access-glf9l") pod "14fda1fb-c1aa-4b0c-a22a-9b65d3be738d" (UID: "14fda1fb-c1aa-4b0c-a22a-9b65d3be738d"). InnerVolumeSpecName "kube-api-access-glf9l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 21:58:12.991999 master-0 kubenswrapper[7480]: I0308 21:58:12.991929 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6366c13e-beef-4918-991a-33acee9110e1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6366c13e-beef-4918-991a-33acee9110e1" (UID: "6366c13e-beef-4918-991a-33acee9110e1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 21:58:12.991999 master-0 kubenswrapper[7480]: I0308 21:58:12.991954 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6366c13e-beef-4918-991a-33acee9110e1-kube-api-access-mhv7g" (OuterVolumeSpecName: "kube-api-access-mhv7g") pod "6366c13e-beef-4918-991a-33acee9110e1" (UID: "6366c13e-beef-4918-991a-33acee9110e1"). InnerVolumeSpecName "kube-api-access-mhv7g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 21:58:12.996175 master-0 kubenswrapper[7480]: I0308 21:58:12.995986 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14fda1fb-c1aa-4b0c-a22a-9b65d3be738d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "14fda1fb-c1aa-4b0c-a22a-9b65d3be738d" (UID: "14fda1fb-c1aa-4b0c-a22a-9b65d3be738d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 21:58:13.088718 master-0 kubenswrapper[7480]: I0308 21:58:13.088601 7480 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6366c13e-beef-4918-991a-33acee9110e1-config\") on node \"master-0\" DevicePath \"\""
Mar 08 21:58:13.088718 master-0 kubenswrapper[7480]: I0308 21:58:13.088650 7480 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14fda1fb-c1aa-4b0c-a22a-9b65d3be738d-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 08 21:58:13.088718 master-0 kubenswrapper[7480]: I0308 21:58:13.088663 7480 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glf9l\" (UniqueName: \"kubernetes.io/projected/14fda1fb-c1aa-4b0c-a22a-9b65d3be738d-kube-api-access-glf9l\") on node \"master-0\" DevicePath \"\""
Mar 08 21:58:13.088718 master-0 kubenswrapper[7480]: I0308 21:58:13.088672 7480 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14fda1fb-c1aa-4b0c-a22a-9b65d3be738d-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 08 21:58:13.088718 master-0 kubenswrapper[7480]: I0308 21:58:13.088718 7480 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhv7g\" (UniqueName: \"kubernetes.io/projected/6366c13e-beef-4918-991a-33acee9110e1-kube-api-access-mhv7g\") on node \"master-0\" DevicePath \"\""
Mar 08 21:58:13.088718 master-0 kubenswrapper[7480]: I0308 21:58:13.088727 7480 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14fda1fb-c1aa-4b0c-a22a-9b65d3be738d-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 08 21:58:13.088718 master-0 kubenswrapper[7480]: I0308 21:58:13.088737 7480 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14fda1fb-c1aa-4b0c-a22a-9b65d3be738d-config\") on node \"master-0\" DevicePath \"\""
Mar 08 21:58:13.088718 master-0 kubenswrapper[7480]: I0308 21:58:13.088747 7480 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6366c13e-beef-4918-991a-33acee9110e1-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 08 21:58:13.088718 master-0 kubenswrapper[7480]: I0308 21:58:13.088757 7480 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6366c13e-beef-4918-991a-33acee9110e1-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 08 21:58:13.535216 master-0 kubenswrapper[7480]: I0308 21:58:13.535157 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6"
Mar 08 21:58:13.540011 master-0 kubenswrapper[7480]: I0308 21:58:13.539974 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6"
Mar 08 21:58:13.546886 master-0 kubenswrapper[7480]: I0308 21:58:13.546838 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n" event={"ID":"14fda1fb-c1aa-4b0c-a22a-9b65d3be738d","Type":"ContainerDied","Data":"9b4ee3f8afba95786d7e7f99f9f6f2c9cf49a581eb96cff61ba3f8907df4b5b9"}
for pod" pod="openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n" event={"ID":"14fda1fb-c1aa-4b0c-a22a-9b65d3be738d","Type":"ContainerDied","Data":"9b4ee3f8afba95786d7e7f99f9f6f2c9cf49a581eb96cff61ba3f8907df4b5b9"} Mar 08 21:58:13.547058 master-0 kubenswrapper[7480]: I0308 21:58:13.546895 7480 scope.go:117] "RemoveContainer" containerID="8bccabdb4928515f7b56812aa0bca7cb8124c5887acea182d37b4988604c1998" Mar 08 21:58:13.547058 master-0 kubenswrapper[7480]: I0308 21:58:13.547005 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n" Mar 08 21:58:13.564356 master-0 kubenswrapper[7480]: I0308 21:58:13.564036 7480 generic.go:334] "Generic (PLEG): container finished" podID="6366c13e-beef-4918-991a-33acee9110e1" containerID="b39e026848b75146bec034ccac251732900c2c5808d7d8a44b5421d1189232be" exitCode=0 Mar 08 21:58:13.566707 master-0 kubenswrapper[7480]: I0308 21:58:13.564862 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6584845c9c-w4jhq" Mar 08 21:58:13.566707 master-0 kubenswrapper[7480]: I0308 21:58:13.565030 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6584845c9c-w4jhq" event={"ID":"6366c13e-beef-4918-991a-33acee9110e1","Type":"ContainerDied","Data":"b39e026848b75146bec034ccac251732900c2c5808d7d8a44b5421d1189232be"} Mar 08 21:58:13.566707 master-0 kubenswrapper[7480]: I0308 21:58:13.565183 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6584845c9c-w4jhq" event={"ID":"6366c13e-beef-4918-991a-33acee9110e1","Type":"ContainerDied","Data":"50600a8aafbac81fe6228bdc6e1f392621a39a20a9f82da05589d2c77d0ad50e"} Mar 08 21:58:13.587750 master-0 kubenswrapper[7480]: I0308 21:58:13.587695 7480 scope.go:117] "RemoveContainer" containerID="b39e026848b75146bec034ccac251732900c2c5808d7d8a44b5421d1189232be" Mar 08 21:58:13.604912 master-0 kubenswrapper[7480]: I0308 21:58:13.604815 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k"] Mar 08 21:58:13.605187 master-0 kubenswrapper[7480]: E0308 21:58:13.605023 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6366c13e-beef-4918-991a-33acee9110e1" containerName="route-controller-manager" Mar 08 21:58:13.605187 master-0 kubenswrapper[7480]: I0308 21:58:13.605036 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="6366c13e-beef-4918-991a-33acee9110e1" containerName="route-controller-manager" Mar 08 21:58:13.605187 master-0 kubenswrapper[7480]: E0308 21:58:13.605052 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14fda1fb-c1aa-4b0c-a22a-9b65d3be738d" containerName="controller-manager" Mar 08 21:58:13.605187 master-0 kubenswrapper[7480]: I0308 21:58:13.605057 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="14fda1fb-c1aa-4b0c-a22a-9b65d3be738d" containerName="controller-manager" Mar 08 21:58:13.605187 master-0 kubenswrapper[7480]: E0308 21:58:13.605066 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48bbd836-7516-4bc4-9e94-a70026eeacfb" containerName="installer" Mar 08 21:58:13.605187 master-0 kubenswrapper[7480]: I0308 21:58:13.605076 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="48bbd836-7516-4bc4-9e94-a70026eeacfb" 
containerName="installer" Mar 08 21:58:13.605187 master-0 kubenswrapper[7480]: I0308 21:58:13.605170 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="6366c13e-beef-4918-991a-33acee9110e1" containerName="route-controller-manager" Mar 08 21:58:13.605187 master-0 kubenswrapper[7480]: I0308 21:58:13.605187 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="48bbd836-7516-4bc4-9e94-a70026eeacfb" containerName="installer" Mar 08 21:58:13.605187 master-0 kubenswrapper[7480]: I0308 21:58:13.605199 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="14fda1fb-c1aa-4b0c-a22a-9b65d3be738d" containerName="controller-manager" Mar 08 21:58:13.605622 master-0 kubenswrapper[7480]: I0308 21:58:13.605588 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" Mar 08 21:58:13.611183 master-0 kubenswrapper[7480]: I0308 21:58:13.611078 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f7df5f5b-txsrq"] Mar 08 21:58:13.612263 master-0 kubenswrapper[7480]: I0308 21:58:13.612221 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 21:58:13.612555 master-0 kubenswrapper[7480]: I0308 21:58:13.612511 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 08 21:58:13.622714 master-0 kubenswrapper[7480]: I0308 21:58:13.620157 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 08 21:58:13.622714 master-0 kubenswrapper[7480]: I0308 21:58:13.620654 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 08 21:58:13.622714 master-0 kubenswrapper[7480]: I0308 21:58:13.620980 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 08 21:58:13.622714 master-0 kubenswrapper[7480]: I0308 21:58:13.621323 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 08 21:58:13.622714 master-0 kubenswrapper[7480]: I0308 21:58:13.621512 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 08 21:58:13.622714 master-0 kubenswrapper[7480]: I0308 21:58:13.621633 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 08 21:58:13.623385 master-0 kubenswrapper[7480]: I0308 21:58:13.623357 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 08 21:58:13.623476 master-0 kubenswrapper[7480]: I0308 21:58:13.623446 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 08 21:58:13.623476 master-0 kubenswrapper[7480]: I0308 21:58:13.623472 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 08 21:58:13.625011 master-0 kubenswrapper[7480]: I0308 21:58:13.624980 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 08 21:58:13.632161 master-0 kubenswrapper[7480]: I0308 21:58:13.631360 7480 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f7df5f5b-txsrq"] Mar 08 21:58:13.659516 master-0 kubenswrapper[7480]: I0308 21:58:13.659182 7480 scope.go:117] "RemoveContainer" containerID="b39e026848b75146bec034ccac251732900c2c5808d7d8a44b5421d1189232be" Mar 08 21:58:13.659516 master-0 kubenswrapper[7480]: I0308 21:58:13.659288 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k"] Mar 08 21:58:13.659680 master-0 kubenswrapper[7480]: E0308 21:58:13.659622 7480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b39e026848b75146bec034ccac251732900c2c5808d7d8a44b5421d1189232be\": container with ID starting with b39e026848b75146bec034ccac251732900c2c5808d7d8a44b5421d1189232be not found: ID does not exist" containerID="b39e026848b75146bec034ccac251732900c2c5808d7d8a44b5421d1189232be" Mar 08 21:58:13.659825 master-0 kubenswrapper[7480]: I0308 21:58:13.659672 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b39e026848b75146bec034ccac251732900c2c5808d7d8a44b5421d1189232be"} err="failed to get container status \"b39e026848b75146bec034ccac251732900c2c5808d7d8a44b5421d1189232be\": rpc error: code = NotFound desc = could not find container \"b39e026848b75146bec034ccac251732900c2c5808d7d8a44b5421d1189232be\": container with ID starting with b39e026848b75146bec034ccac251732900c2c5808d7d8a44b5421d1189232be not found: ID does not exist" Mar 08 21:58:13.669765 master-0 kubenswrapper[7480]: I0308 21:58:13.661298 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6584845c9c-w4jhq"] Mar 08 21:58:13.669765 master-0 kubenswrapper[7480]: I0308 21:58:13.661327 7480 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6584845c9c-w4jhq"] Mar 08 21:58:13.701257 master-0 kubenswrapper[7480]: I0308 21:58:13.701215 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-proxy-ca-bundles\") pod \"controller-manager-f7df5f5b-txsrq\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") " pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 21:58:13.701543 master-0 kubenswrapper[7480]: I0308 21:58:13.701525 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-client-ca\") pod \"controller-manager-f7df5f5b-txsrq\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") " pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 21:58:13.701635 master-0 kubenswrapper[7480]: I0308 21:58:13.701621 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v6dc\" (UniqueName: \"kubernetes.io/projected/2395900a-ff6b-46ff-92c6-a8a1b5675b67-kube-api-access-7v6dc\") pod \"controller-manager-f7df5f5b-txsrq\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") " pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 21:58:13.701707 master-0 kubenswrapper[7480]: I0308 21:58:13.701695 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-config\") pod \"controller-manager-f7df5f5b-txsrq\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") " pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 21:58:13.701789 master-0 kubenswrapper[7480]: I0308 21:58:13.701777 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2395900a-ff6b-46ff-92c6-a8a1b5675b67-serving-cert\") pod \"controller-manager-f7df5f5b-txsrq\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") " pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 21:58:13.701865 master-0 kubenswrapper[7480]: I0308 21:58:13.701854 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da51940a-a38f-4baf-9c14-b2f1f46b5aed-config\") pod \"route-controller-manager-86888d445f-7f74k\" (UID: \"da51940a-a38f-4baf-9c14-b2f1f46b5aed\") " pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" Mar 08 21:58:13.701969 master-0 kubenswrapper[7480]: I0308 21:58:13.701952 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clxsk\" (UniqueName: \"kubernetes.io/projected/da51940a-a38f-4baf-9c14-b2f1f46b5aed-kube-api-access-clxsk\") pod \"route-controller-manager-86888d445f-7f74k\" (UID: \"da51940a-a38f-4baf-9c14-b2f1f46b5aed\") " pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" Mar 08 21:58:13.702081 master-0 kubenswrapper[7480]: I0308 21:58:13.702068 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da51940a-a38f-4baf-9c14-b2f1f46b5aed-serving-cert\") pod \"route-controller-manager-86888d445f-7f74k\" (UID: \"da51940a-a38f-4baf-9c14-b2f1f46b5aed\") " pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" Mar 08 21:58:13.702223 master-0 kubenswrapper[7480]: I0308 21:58:13.702200 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da51940a-a38f-4baf-9c14-b2f1f46b5aed-client-ca\") pod \"route-controller-manager-86888d445f-7f74k\" (UID: \"da51940a-a38f-4baf-9c14-b2f1f46b5aed\") " pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" Mar 08 21:58:13.730671 master-0 kubenswrapper[7480]: I0308 21:58:13.728115 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n"] Mar 08 21:58:13.734819 master-0 kubenswrapper[7480]: I0308 21:58:13.731247 7480 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5bf6f788bb-vmt9n"] Mar 08 21:58:13.792632 master-0 kubenswrapper[7480]: I0308 21:58:13.792474 7480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14fda1fb-c1aa-4b0c-a22a-9b65d3be738d" path="/var/lib/kubelet/pods/14fda1fb-c1aa-4b0c-a22a-9b65d3be738d/volumes" Mar 08 21:58:13.793711 master-0 kubenswrapper[7480]: I0308 21:58:13.793676 7480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48bbd836-7516-4bc4-9e94-a70026eeacfb" path="/var/lib/kubelet/pods/48bbd836-7516-4bc4-9e94-a70026eeacfb/volumes" Mar 08 21:58:13.796443 master-0 
kubenswrapper[7480]: I0308 21:58:13.796054 7480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6366c13e-beef-4918-991a-33acee9110e1" path="/var/lib/kubelet/pods/6366c13e-beef-4918-991a-33acee9110e1/volumes" Mar 08 21:58:13.804504 master-0 kubenswrapper[7480]: I0308 21:58:13.804410 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-config\") pod \"controller-manager-f7df5f5b-txsrq\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") " pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 21:58:13.804504 master-0 kubenswrapper[7480]: I0308 21:58:13.804463 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2395900a-ff6b-46ff-92c6-a8a1b5675b67-serving-cert\") pod \"controller-manager-f7df5f5b-txsrq\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") " pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 21:58:13.804679 master-0 kubenswrapper[7480]: I0308 21:58:13.804602 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da51940a-a38f-4baf-9c14-b2f1f46b5aed-config\") pod \"route-controller-manager-86888d445f-7f74k\" (UID: \"da51940a-a38f-4baf-9c14-b2f1f46b5aed\") " pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" Mar 08 21:58:13.804679 master-0 kubenswrapper[7480]: I0308 21:58:13.804674 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clxsk\" (UniqueName: \"kubernetes.io/projected/da51940a-a38f-4baf-9c14-b2f1f46b5aed-kube-api-access-clxsk\") pod \"route-controller-manager-86888d445f-7f74k\" (UID: \"da51940a-a38f-4baf-9c14-b2f1f46b5aed\") " pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" Mar 08 21:58:13.804742 master-0 kubenswrapper[7480]: I0308 21:58:13.804726 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da51940a-a38f-4baf-9c14-b2f1f46b5aed-serving-cert\") pod \"route-controller-manager-86888d445f-7f74k\" (UID: \"da51940a-a38f-4baf-9c14-b2f1f46b5aed\") " pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" Mar 08 21:58:13.804824 master-0 kubenswrapper[7480]: I0308 21:58:13.804787 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da51940a-a38f-4baf-9c14-b2f1f46b5aed-client-ca\") pod \"route-controller-manager-86888d445f-7f74k\" (UID: \"da51940a-a38f-4baf-9c14-b2f1f46b5aed\") " pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" Mar 08 21:58:13.805354 master-0 kubenswrapper[7480]: I0308 21:58:13.805331 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-proxy-ca-bundles\") pod \"controller-manager-f7df5f5b-txsrq\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") " pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 21:58:13.805547 master-0 kubenswrapper[7480]: I0308 21:58:13.805363 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-client-ca\") pod 
\"controller-manager-f7df5f5b-txsrq\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") " pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 21:58:13.805547 master-0 kubenswrapper[7480]: I0308 21:58:13.805387 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v6dc\" (UniqueName: \"kubernetes.io/projected/2395900a-ff6b-46ff-92c6-a8a1b5675b67-kube-api-access-7v6dc\") pod \"controller-manager-f7df5f5b-txsrq\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") " pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 21:58:13.807421 master-0 kubenswrapper[7480]: I0308 21:58:13.807368 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-client-ca\") pod \"controller-manager-f7df5f5b-txsrq\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") " pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 21:58:13.807791 master-0 kubenswrapper[7480]: I0308 21:58:13.807721 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-config\") pod \"controller-manager-f7df5f5b-txsrq\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") " pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 21:58:13.808414 master-0 kubenswrapper[7480]: I0308 21:58:13.808381 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da51940a-a38f-4baf-9c14-b2f1f46b5aed-client-ca\") pod \"route-controller-manager-86888d445f-7f74k\" (UID: \"da51940a-a38f-4baf-9c14-b2f1f46b5aed\") " pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" Mar 08 21:58:13.808555 master-0 kubenswrapper[7480]: I0308 21:58:13.808523 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da51940a-a38f-4baf-9c14-b2f1f46b5aed-config\") pod \"route-controller-manager-86888d445f-7f74k\" (UID: \"da51940a-a38f-4baf-9c14-b2f1f46b5aed\") " pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" Mar 08 21:58:13.810608 master-0 kubenswrapper[7480]: I0308 21:58:13.809654 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da51940a-a38f-4baf-9c14-b2f1f46b5aed-serving-cert\") pod \"route-controller-manager-86888d445f-7f74k\" (UID: \"da51940a-a38f-4baf-9c14-b2f1f46b5aed\") " pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" Mar 08 21:58:13.810608 master-0 kubenswrapper[7480]: I0308 21:58:13.810013 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-proxy-ca-bundles\") pod \"controller-manager-f7df5f5b-txsrq\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") " pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 21:58:13.828807 master-0 kubenswrapper[7480]: I0308 21:58:13.826890 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2395900a-ff6b-46ff-92c6-a8a1b5675b67-serving-cert\") pod \"controller-manager-f7df5f5b-txsrq\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") " 
pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 21:58:13.834613 master-0 kubenswrapper[7480]: I0308 21:58:13.834555 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v6dc\" (UniqueName: \"kubernetes.io/projected/2395900a-ff6b-46ff-92c6-a8a1b5675b67-kube-api-access-7v6dc\") pod \"controller-manager-f7df5f5b-txsrq\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") " pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 21:58:13.836992 master-0 kubenswrapper[7480]: I0308 21:58:13.836939 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clxsk\" (UniqueName: \"kubernetes.io/projected/da51940a-a38f-4baf-9c14-b2f1f46b5aed-kube-api-access-clxsk\") pod \"route-controller-manager-86888d445f-7f74k\" (UID: \"da51940a-a38f-4baf-9c14-b2f1f46b5aed\") " pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" Mar 08 21:58:13.972102 master-0 kubenswrapper[7480]: I0308 21:58:13.960637 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" Mar 08 21:58:13.991765 master-0 kubenswrapper[7480]: I0308 21:58:13.991701 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 21:58:14.067723 master-0 kubenswrapper[7480]: I0308 21:58:14.067683 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 08 21:58:14.071477 master-0 kubenswrapper[7480]: I0308 21:58:14.070436 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 08 21:58:14.090469 master-0 kubenswrapper[7480]: I0308 21:58:14.090397 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 08 21:58:14.211363 master-0 kubenswrapper[7480]: I0308 21:58:14.211276 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c633355a-b323-4458-8ecb-1e490d115f59-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"c633355a-b323-4458-8ecb-1e490d115f59\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 08 21:58:14.211363 master-0 kubenswrapper[7480]: I0308 21:58:14.211343 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c633355a-b323-4458-8ecb-1e490d115f59-kube-api-access\") pod \"installer-3-master-0\" (UID: \"c633355a-b323-4458-8ecb-1e490d115f59\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 08 21:58:14.211588 master-0 kubenswrapper[7480]: I0308 21:58:14.211502 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c633355a-b323-4458-8ecb-1e490d115f59-var-lock\") pod \"installer-3-master-0\" (UID: \"c633355a-b323-4458-8ecb-1e490d115f59\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 08 21:58:14.313024 master-0 kubenswrapper[7480]: I0308 21:58:14.312460 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c633355a-b323-4458-8ecb-1e490d115f59-kubelet-dir\") pod \"installer-3-master-0\" (UID: 
\"c633355a-b323-4458-8ecb-1e490d115f59\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 08 21:58:14.313024 master-0 kubenswrapper[7480]: I0308 21:58:14.312515 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c633355a-b323-4458-8ecb-1e490d115f59-kube-api-access\") pod \"installer-3-master-0\" (UID: \"c633355a-b323-4458-8ecb-1e490d115f59\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 08 21:58:14.313024 master-0 kubenswrapper[7480]: I0308 21:58:14.312564 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c633355a-b323-4458-8ecb-1e490d115f59-var-lock\") pod \"installer-3-master-0\" (UID: \"c633355a-b323-4458-8ecb-1e490d115f59\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 08 21:58:14.313024 master-0 kubenswrapper[7480]: I0308 21:58:14.312582 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c633355a-b323-4458-8ecb-1e490d115f59-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"c633355a-b323-4458-8ecb-1e490d115f59\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 08 21:58:14.313024 master-0 kubenswrapper[7480]: I0308 21:58:14.312620 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c633355a-b323-4458-8ecb-1e490d115f59-var-lock\") pod \"installer-3-master-0\" (UID: \"c633355a-b323-4458-8ecb-1e490d115f59\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 08 21:58:14.334680 master-0 kubenswrapper[7480]: I0308 21:58:14.334541 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c633355a-b323-4458-8ecb-1e490d115f59-kube-api-access\") pod \"installer-3-master-0\" (UID: \"c633355a-b323-4458-8ecb-1e490d115f59\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 08 21:58:14.399199 master-0 kubenswrapper[7480]: I0308 21:58:14.399121 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 08 21:58:15.954853 master-0 kubenswrapper[7480]: I0308 21:58:15.954749 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 08 21:58:15.958206 master-0 kubenswrapper[7480]: I0308 21:58:15.958167 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 08 21:58:15.964525 master-0 kubenswrapper[7480]: I0308 21:58:15.964452 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 08 21:58:15.972137 master-0 kubenswrapper[7480]: I0308 21:58:15.970686 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 08 21:58:16.043197 master-0 kubenswrapper[7480]: I0308 21:58:16.043115 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 08 21:58:16.043197 master-0 kubenswrapper[7480]: I0308 21:58:16.043172 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0-kube-api-access\") pod \"installer-1-master-0\" (UID: \"147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 08 21:58:16.043472 master-0 kubenswrapper[7480]: I0308 21:58:16.043428 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0-var-lock\") pod \"installer-1-master-0\" (UID: \"147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 08 21:58:16.144639 master-0 kubenswrapper[7480]: I0308 21:58:16.144560 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0-var-lock\") pod \"installer-1-master-0\" (UID: \"147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 08 21:58:16.144888 master-0 kubenswrapper[7480]: I0308 21:58:16.144654 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 08 21:58:16.144888 master-0 kubenswrapper[7480]: I0308 21:58:16.144675 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0-kube-api-access\") pod \"installer-1-master-0\" (UID: \"147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 08 21:58:16.145035 master-0 kubenswrapper[7480]: I0308 21:58:16.144970 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 08 21:58:16.145107 master-0 kubenswrapper[7480]: I0308 21:58:16.145084 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0-var-lock\") pod \"installer-1-master-0\" (UID: 
\"147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 08 21:58:16.161154 master-0 kubenswrapper[7480]: I0308 21:58:16.161053 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0-kube-api-access\") pod \"installer-1-master-0\" (UID: \"147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 08 21:58:16.290902 master-0 kubenswrapper[7480]: I0308 21:58:16.290759 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 08 21:58:16.697836 master-0 kubenswrapper[7480]: I0308 21:58:16.696457 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-65ts8" Mar 08 21:58:19.058892 master-0 kubenswrapper[7480]: I0308 21:58:19.058798 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 08 21:58:19.061192 master-0 kubenswrapper[7480]: I0308 21:58:19.061128 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 08 21:58:19.338211 master-0 kubenswrapper[7480]: I0308 21:58:19.336364 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k"] Mar 08 21:58:19.338211 master-0 kubenswrapper[7480]: I0308 21:58:19.337064 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f7df5f5b-txsrq"] Mar 08 21:58:19.944510 master-0 kubenswrapper[7480]: W0308 21:58:19.944411 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda51940a_a38f_4baf_9c14_b2f1f46b5aed.slice/crio-49a678c1404278a258bd5f7da531aa1c8094425dc0f885e61d43b5bf65b98923 WatchSource:0}: Error finding container 49a678c1404278a258bd5f7da531aa1c8094425dc0f885e61d43b5bf65b98923: Status 404 returned error can't find the container with id 49a678c1404278a258bd5f7da531aa1c8094425dc0f885e61d43b5bf65b98923 Mar 08 21:58:20.622187 master-0 kubenswrapper[7480]: I0308 21:58:20.622050 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" event={"ID":"da51940a-a38f-4baf-9c14-b2f1f46b5aed","Type":"ContainerStarted","Data":"49a678c1404278a258bd5f7da531aa1c8094425dc0f885e61d43b5bf65b98923"} Mar 08 21:58:20.624894 master-0 kubenswrapper[7480]: I0308 21:58:20.624841 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" event={"ID":"2395900a-ff6b-46ff-92c6-a8a1b5675b67","Type":"ContainerStarted","Data":"556cd17b0dd9a0437b38f51d3f691ed442f4e900ac26991a4d6a0e87a7a93e20"} Mar 08 21:58:20.626018 master-0 kubenswrapper[7480]: I0308 21:58:20.625968 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0","Type":"ContainerStarted","Data":"cd2c2cc51881256bddd6550f01c7b5dafc5dd571e49b29567f752b73ae5dc26c"} Mar 08 21:58:20.626889 master-0 kubenswrapper[7480]: I0308 21:58:20.626844 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" 
event={"ID":"c633355a-b323-4458-8ecb-1e490d115f59","Type":"ContainerStarted","Data":"1d3dcf055543df28f3482d4eda49126cfdf056d4ebfa04ae9c5c2b3c8a2fd988"} Mar 08 21:58:20.783923 master-0 kubenswrapper[7480]: I0308 21:58:20.783822 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6686554ddc-c246n"] Mar 08 21:58:20.801437 master-0 kubenswrapper[7480]: I0308 21:58:20.801378 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-c246n" Mar 08 21:58:20.806181 master-0 kubenswrapper[7480]: I0308 21:58:20.806115 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 08 21:58:20.806413 master-0 kubenswrapper[7480]: I0308 21:58:20.806373 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-j75vf" Mar 08 21:58:20.806504 master-0 kubenswrapper[7480]: I0308 21:58:20.806413 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 08 21:58:20.806939 master-0 kubenswrapper[7480]: I0308 21:58:20.806906 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 08 21:58:20.927654 master-0 kubenswrapper[7480]: I0308 21:58:20.927600 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjlqz\" (UniqueName: \"kubernetes.io/projected/6eb502a1-db10-46ba-b698-461919464fb9-kube-api-access-sjlqz\") pod \"control-plane-machine-set-operator-6686554ddc-c246n\" (UID: \"6eb502a1-db10-46ba-b698-461919464fb9\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-c246n" Mar 08 21:58:20.927779 master-0 kubenswrapper[7480]: I0308 21:58:20.927686 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6eb502a1-db10-46ba-b698-461919464fb9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-c246n\" (UID: \"6eb502a1-db10-46ba-b698-461919464fb9\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-c246n" Mar 08 21:58:21.030728 master-0 kubenswrapper[7480]: I0308 21:58:21.029592 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjlqz\" (UniqueName: \"kubernetes.io/projected/6eb502a1-db10-46ba-b698-461919464fb9-kube-api-access-sjlqz\") pod \"control-plane-machine-set-operator-6686554ddc-c246n\" (UID: \"6eb502a1-db10-46ba-b698-461919464fb9\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-c246n" Mar 08 21:58:21.030728 master-0 kubenswrapper[7480]: I0308 21:58:21.029646 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6eb502a1-db10-46ba-b698-461919464fb9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-c246n\" (UID: \"6eb502a1-db10-46ba-b698-461919464fb9\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-c246n" Mar 08 21:58:21.033976 master-0 kubenswrapper[7480]: I0308 21:58:21.033913 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6eb502a1-db10-46ba-b698-461919464fb9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-c246n\" (UID: \"6eb502a1-db10-46ba-b698-461919464fb9\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-c246n" Mar 08 21:58:21.190125 master-0 kubenswrapper[7480]: I0308 21:58:21.187586 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6686554ddc-c246n"] Mar 08 21:58:21.634186 master-0 kubenswrapper[7480]: I0308 21:58:21.634090 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" event={"ID":"83b5f0b6-adee-4820-8212-b4d182b178d2","Type":"ContainerStarted","Data":"ba2aacb0c56514dfd295769df8f772a329a5770387b5ffe2e5f133aa557b52d6"} Mar 08 21:58:21.636269 master-0 kubenswrapper[7480]: I0308 21:58:21.635979 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" event={"ID":"3c50dd1f-fcbc-412c-a1cc-0738ea4464e0","Type":"ContainerStarted","Data":"948426f8a7e9fc8067b2b637e9391c90e32f58271131d74f32119f667f74e79b"} Mar 08 21:58:21.636269 master-0 kubenswrapper[7480]: I0308 21:58:21.636252 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" Mar 08 21:58:21.638234 master-0 kubenswrapper[7480]: I0308 21:58:21.638170 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" event={"ID":"da51940a-a38f-4baf-9c14-b2f1f46b5aed","Type":"ContainerStarted","Data":"2d58ac2e5c53847ce4d3e3a5eec0022908a1efa2318d790d98db630d929ded39"} Mar 08 21:58:21.639162 master-0 kubenswrapper[7480]: I0308 21:58:21.639110 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" Mar 08 21:58:21.643192 master-0 kubenswrapper[7480]: I0308 21:58:21.642187 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" event={"ID":"2395900a-ff6b-46ff-92c6-a8a1b5675b67","Type":"ContainerStarted","Data":"04d2e0520d46f0208b4f81730f6d539f9f11e470a035dc08dbf06867ed1a4e14"} Mar 08 21:58:21.643192 master-0 kubenswrapper[7480]: I0308 21:58:21.642434 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 21:58:21.644239 master-0 kubenswrapper[7480]: I0308 21:58:21.644156 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0","Type":"ContainerStarted","Data":"23ca4cac0c50a9d156ec6ed1b11f780e700b2306444f16b3646285a8a0f6b21b"} Mar 08 21:58:21.645008 master-0 kubenswrapper[7480]: I0308 21:58:21.644969 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" Mar 08 21:58:21.646209 master-0 kubenswrapper[7480]: I0308 21:58:21.646149 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"c633355a-b323-4458-8ecb-1e490d115f59","Type":"ContainerStarted","Data":"28682516e11b7da515d28696337779453c2c96bd4cf9bfd8a8b3aa00aef7307b"} Mar 08 
21:58:21.648670 master-0 kubenswrapper[7480]: I0308 21:58:21.648621 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 21:58:22.210313 master-0 kubenswrapper[7480]: I0308 21:58:22.208754 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjlqz\" (UniqueName: \"kubernetes.io/projected/6eb502a1-db10-46ba-b698-461919464fb9-kube-api-access-sjlqz\") pod \"control-plane-machine-set-operator-6686554ddc-c246n\" (UID: \"6eb502a1-db10-46ba-b698-461919464fb9\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-c246n" Mar 08 21:58:22.245104 master-0 kubenswrapper[7480]: I0308 21:58:22.243497 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" Mar 08 21:58:22.287108 master-0 kubenswrapper[7480]: I0308 21:58:22.285914 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-tn4pc"] Mar 08 21:58:22.287108 master-0 kubenswrapper[7480]: I0308 21:58:22.286656 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-tn4pc" Mar 08 21:58:22.297562 master-0 kubenswrapper[7480]: I0308 21:58:22.297521 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 08 21:58:22.297849 master-0 kubenswrapper[7480]: I0308 21:58:22.297744 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 08 21:58:22.297902 master-0 kubenswrapper[7480]: I0308 21:58:22.297887 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 08 21:58:22.298061 master-0 kubenswrapper[7480]: I0308 21:58:22.298038 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 08 21:58:22.303111 master-0 kubenswrapper[7480]: I0308 21:58:22.302407 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 08 21:58:22.320146 master-0 kubenswrapper[7480]: I0308 21:58:22.317583 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" podStartSLOduration=10.317563592 podStartE2EDuration="10.317563592s" podCreationTimestamp="2026-03-08 21:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 21:58:22.316893424 +0000 UTC m=+52.770514026" watchObservedRunningTime="2026-03-08 21:58:22.317563592 +0000 UTC m=+52.771184224" Mar 08 21:58:22.352845 master-0 kubenswrapper[7480]: I0308 21:58:22.350439 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-c246n" Mar 08 21:58:22.352845 master-0 kubenswrapper[7480]: I0308 21:58:22.352704 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgnsn\" (UniqueName: \"kubernetes.io/projected/2c2c4964-678e-46ac-a500-8efc6b8255d9-kube-api-access-lgnsn\") pod \"machine-approver-955fcfb87-tn4pc\" (UID: \"2c2c4964-678e-46ac-a500-8efc6b8255d9\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-tn4pc" Mar 08 21:58:22.352845 master-0 kubenswrapper[7480]: I0308 21:58:22.352786 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2c2c4964-678e-46ac-a500-8efc6b8255d9-auth-proxy-config\") pod \"machine-approver-955fcfb87-tn4pc\" (UID: \"2c2c4964-678e-46ac-a500-8efc6b8255d9\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-tn4pc" Mar 08 21:58:22.352845 master-0 kubenswrapper[7480]: I0308 21:58:22.352834 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c2c4964-678e-46ac-a500-8efc6b8255d9-config\") pod \"machine-approver-955fcfb87-tn4pc\" (UID: \"2c2c4964-678e-46ac-a500-8efc6b8255d9\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-tn4pc" Mar 08 21:58:22.352845 master-0 kubenswrapper[7480]: I0308 21:58:22.352863 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2c2c4964-678e-46ac-a500-8efc6b8255d9-machine-approver-tls\") pod \"machine-approver-955fcfb87-tn4pc\" (UID: \"2c2c4964-678e-46ac-a500-8efc6b8255d9\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-tn4pc" Mar 08 21:58:22.416108 master-0 kubenswrapper[7480]: I0308 21:58:22.414918 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-master-0" podStartSLOduration=8.414891261 podStartE2EDuration="8.414891261s" podCreationTimestamp="2026-03-08 21:58:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 21:58:22.378246899 +0000 UTC m=+52.831867511" watchObservedRunningTime="2026-03-08 21:58:22.414891261 +0000 UTC m=+52.868511863" Mar 08 21:58:22.457105 master-0 kubenswrapper[7480]: I0308 21:58:22.453447 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2c2c4964-678e-46ac-a500-8efc6b8255d9-machine-approver-tls\") pod \"machine-approver-955fcfb87-tn4pc\" (UID: \"2c2c4964-678e-46ac-a500-8efc6b8255d9\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-tn4pc" Mar 08 21:58:22.457105 master-0 kubenswrapper[7480]: I0308 21:58:22.453539 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgnsn\" (UniqueName: \"kubernetes.io/projected/2c2c4964-678e-46ac-a500-8efc6b8255d9-kube-api-access-lgnsn\") pod \"machine-approver-955fcfb87-tn4pc\" (UID: \"2c2c4964-678e-46ac-a500-8efc6b8255d9\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-tn4pc" Mar 08 21:58:22.457105 master-0 kubenswrapper[7480]: I0308 21:58:22.453568 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2c2c4964-678e-46ac-a500-8efc6b8255d9-auth-proxy-config\") pod \"machine-approver-955fcfb87-tn4pc\" (UID: \"2c2c4964-678e-46ac-a500-8efc6b8255d9\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-tn4pc" Mar 08 21:58:22.457105 master-0 kubenswrapper[7480]: I0308 21:58:22.453587 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c2c4964-678e-46ac-a500-8efc6b8255d9-config\") pod \"machine-approver-955fcfb87-tn4pc\" (UID: \"2c2c4964-678e-46ac-a500-8efc6b8255d9\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-tn4pc" Mar 08 21:58:22.457105 master-0 kubenswrapper[7480]: I0308 21:58:22.454169 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c2c4964-678e-46ac-a500-8efc6b8255d9-config\") pod \"machine-approver-955fcfb87-tn4pc\" (UID: \"2c2c4964-678e-46ac-a500-8efc6b8255d9\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-tn4pc" Mar 08 21:58:22.457105 master-0 kubenswrapper[7480]: I0308 21:58:22.454724 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2c2c4964-678e-46ac-a500-8efc6b8255d9-auth-proxy-config\") pod \"machine-approver-955fcfb87-tn4pc\" (UID: \"2c2c4964-678e-46ac-a500-8efc6b8255d9\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-tn4pc" Mar 08 21:58:22.475129 master-0 kubenswrapper[7480]: I0308 21:58:22.469754 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" podStartSLOduration=10.469713996 podStartE2EDuration="10.469713996s" podCreationTimestamp="2026-03-08 21:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 21:58:22.463911805 +0000 UTC m=+52.917532407" watchObservedRunningTime="2026-03-08 21:58:22.469713996 +0000 UTC m=+52.923334598" Mar 08 21:58:22.475129 master-0 kubenswrapper[7480]: I0308 21:58:22.472169 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2c2c4964-678e-46ac-a500-8efc6b8255d9-machine-approver-tls\") pod \"machine-approver-955fcfb87-tn4pc\" (UID: \"2c2c4964-678e-46ac-a500-8efc6b8255d9\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-tn4pc" Mar 08 21:58:22.495107 master-0 kubenswrapper[7480]: I0308 21:58:22.494099 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgnsn\" (UniqueName: \"kubernetes.io/projected/2c2c4964-678e-46ac-a500-8efc6b8255d9-kube-api-access-lgnsn\") pod \"machine-approver-955fcfb87-tn4pc\" (UID: \"2c2c4964-678e-46ac-a500-8efc6b8255d9\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-tn4pc" Mar 08 21:58:22.624156 master-0 kubenswrapper[7480]: I0308 21:58:22.624052 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-tn4pc" Mar 08 21:58:22.691381 master-0 kubenswrapper[7480]: I0308 21:58:22.687815 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" event={"ID":"be431b74-1116-4b0f-8b25-bbb0408411b0","Type":"ContainerStarted","Data":"337d76d1f849217e44f712b0d4de222e21178a127e60c214aafe729c50460441"} Mar 08 21:58:22.691381 master-0 kubenswrapper[7480]: I0308 21:58:22.690340 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" Mar 08 21:58:22.691381 master-0 kubenswrapper[7480]: I0308 21:58:22.690366 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 21:58:22.698888 master-0 kubenswrapper[7480]: I0308 21:58:22.698839 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" Mar 08 21:58:23.311055 master-0 kubenswrapper[7480]: I0308 21:58:23.310967 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6686554ddc-c246n"] Mar 08 21:58:23.314818 master-0 kubenswrapper[7480]: I0308 21:58:23.314775 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8k5md"] Mar 08 21:58:23.314908 master-0 kubenswrapper[7480]: W0308 21:58:23.314821 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6eb502a1_db10_46ba_b698_461919464fb9.slice/crio-f656606ac6df85fac107c39c0c27a0a282ed80a965624e99277db535c27a6047 WatchSource:0}: Error finding container f656606ac6df85fac107c39c0c27a0a282ed80a965624e99277db535c27a6047: Status 404 returned error can't find the container with id f656606ac6df85fac107c39c0c27a0a282ed80a965624e99277db535c27a6047 Mar 08 21:58:23.315567 master-0 kubenswrapper[7480]: I0308 21:58:23.315541 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8k5md" Mar 08 21:58:23.345829 master-0 kubenswrapper[7480]: I0308 21:58:23.345778 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8k5md"] Mar 08 21:58:23.365246 master-0 kubenswrapper[7480]: I0308 21:58:23.365089 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-0" podStartSLOduration=8.365048816 podStartE2EDuration="8.365048816s" podCreationTimestamp="2026-03-08 21:58:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 21:58:23.36445581 +0000 UTC m=+53.818076412" watchObservedRunningTime="2026-03-08 21:58:23.365048816 +0000 UTC m=+53.818669418" Mar 08 21:58:23.377915 master-0 kubenswrapper[7480]: I0308 21:58:23.375299 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtb97\" (UniqueName: \"kubernetes.io/projected/18d5d11d-3d01-448f-b34e-55ebc772f5e8-kube-api-access-xtb97\") pod \"community-operators-8k5md\" (UID: \"18d5d11d-3d01-448f-b34e-55ebc772f5e8\") " pod="openshift-marketplace/community-operators-8k5md" Mar 08 21:58:23.377915 master-0 kubenswrapper[7480]: I0308 21:58:23.375354 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18d5d11d-3d01-448f-b34e-55ebc772f5e8-utilities\") pod \"community-operators-8k5md\" (UID: \"18d5d11d-3d01-448f-b34e-55ebc772f5e8\") " pod="openshift-marketplace/community-operators-8k5md" Mar 08 21:58:23.377915 master-0 kubenswrapper[7480]: I0308 21:58:23.375404 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18d5d11d-3d01-448f-b34e-55ebc772f5e8-catalog-content\") pod \"community-operators-8k5md\" (UID: \"18d5d11d-3d01-448f-b34e-55ebc772f5e8\") " pod="openshift-marketplace/community-operators-8k5md" Mar 08 21:58:23.472524 master-0 kubenswrapper[7480]: I0308 21:58:23.470943 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w7p5f"] Mar 08 21:58:23.472524 master-0 kubenswrapper[7480]: I0308 21:58:23.471954 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w7p5f" Mar 08 21:58:23.478824 master-0 kubenswrapper[7480]: I0308 21:58:23.478768 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18d5d11d-3d01-448f-b34e-55ebc772f5e8-utilities\") pod \"community-operators-8k5md\" (UID: \"18d5d11d-3d01-448f-b34e-55ebc772f5e8\") " pod="openshift-marketplace/community-operators-8k5md" Mar 08 21:58:23.478986 master-0 kubenswrapper[7480]: I0308 21:58:23.478855 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18d5d11d-3d01-448f-b34e-55ebc772f5e8-catalog-content\") pod \"community-operators-8k5md\" (UID: \"18d5d11d-3d01-448f-b34e-55ebc772f5e8\") " pod="openshift-marketplace/community-operators-8k5md" Mar 08 21:58:23.478986 master-0 kubenswrapper[7480]: I0308 21:58:23.478894 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtb97\" (UniqueName: \"kubernetes.io/projected/18d5d11d-3d01-448f-b34e-55ebc772f5e8-kube-api-access-xtb97\") pod \"community-operators-8k5md\" (UID: \"18d5d11d-3d01-448f-b34e-55ebc772f5e8\") " pod="openshift-marketplace/community-operators-8k5md" Mar 08 21:58:23.479611 master-0 kubenswrapper[7480]: I0308 21:58:23.479557 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18d5d11d-3d01-448f-b34e-55ebc772f5e8-catalog-content\") pod \"community-operators-8k5md\" (UID: \"18d5d11d-3d01-448f-b34e-55ebc772f5e8\") " pod="openshift-marketplace/community-operators-8k5md" Mar 08 21:58:23.479674 master-0 kubenswrapper[7480]: I0308 21:58:23.479635 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18d5d11d-3d01-448f-b34e-55ebc772f5e8-utilities\") pod \"community-operators-8k5md\" (UID: \"18d5d11d-3d01-448f-b34e-55ebc772f5e8\") " pod="openshift-marketplace/community-operators-8k5md" Mar 08 21:58:23.497944 master-0 kubenswrapper[7480]: I0308 21:58:23.497880 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w7p5f"] Mar 08 21:58:23.524008 master-0 kubenswrapper[7480]: I0308 21:58:23.523940 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtb97\" (UniqueName: \"kubernetes.io/projected/18d5d11d-3d01-448f-b34e-55ebc772f5e8-kube-api-access-xtb97\") pod \"community-operators-8k5md\" (UID: \"18d5d11d-3d01-448f-b34e-55ebc772f5e8\") " pod="openshift-marketplace/community-operators-8k5md" Mar 08 21:58:23.581930 master-0 kubenswrapper[7480]: I0308 21:58:23.581830 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5857b3d0-0865-4ffd-bcc9-3c139c575209-utilities\") pod \"certified-operators-w7p5f\" (UID: \"5857b3d0-0865-4ffd-bcc9-3c139c575209\") " pod="openshift-marketplace/certified-operators-w7p5f" Mar 08 21:58:23.582277 master-0 kubenswrapper[7480]: I0308 21:58:23.581953 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c8wr\" (UniqueName: \"kubernetes.io/projected/5857b3d0-0865-4ffd-bcc9-3c139c575209-kube-api-access-7c8wr\") pod \"certified-operators-w7p5f\" (UID: \"5857b3d0-0865-4ffd-bcc9-3c139c575209\") " pod="openshift-marketplace/certified-operators-w7p5f" Mar 
08 21:58:23.582277 master-0 kubenswrapper[7480]: I0308 21:58:23.581998 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5857b3d0-0865-4ffd-bcc9-3c139c575209-catalog-content\") pod \"certified-operators-w7p5f\" (UID: \"5857b3d0-0865-4ffd-bcc9-3c139c575209\") " pod="openshift-marketplace/certified-operators-w7p5f" Mar 08 21:58:23.641143 master-0 kubenswrapper[7480]: I0308 21:58:23.640635 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8k5md" Mar 08 21:58:23.683183 master-0 kubenswrapper[7480]: I0308 21:58:23.683066 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c8wr\" (UniqueName: \"kubernetes.io/projected/5857b3d0-0865-4ffd-bcc9-3c139c575209-kube-api-access-7c8wr\") pod \"certified-operators-w7p5f\" (UID: \"5857b3d0-0865-4ffd-bcc9-3c139c575209\") " pod="openshift-marketplace/certified-operators-w7p5f" Mar 08 21:58:23.683400 master-0 kubenswrapper[7480]: I0308 21:58:23.683224 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5857b3d0-0865-4ffd-bcc9-3c139c575209-catalog-content\") pod \"certified-operators-w7p5f\" (UID: \"5857b3d0-0865-4ffd-bcc9-3c139c575209\") " pod="openshift-marketplace/certified-operators-w7p5f" Mar 08 21:58:23.683400 master-0 kubenswrapper[7480]: I0308 21:58:23.683262 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5857b3d0-0865-4ffd-bcc9-3c139c575209-utilities\") pod \"certified-operators-w7p5f\" (UID: \"5857b3d0-0865-4ffd-bcc9-3c139c575209\") " pod="openshift-marketplace/certified-operators-w7p5f" Mar 08 21:58:23.683984 master-0 kubenswrapper[7480]: I0308 21:58:23.683912 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5857b3d0-0865-4ffd-bcc9-3c139c575209-utilities\") pod \"certified-operators-w7p5f\" (UID: \"5857b3d0-0865-4ffd-bcc9-3c139c575209\") " pod="openshift-marketplace/certified-operators-w7p5f" Mar 08 21:58:23.686541 master-0 kubenswrapper[7480]: I0308 21:58:23.684252 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5857b3d0-0865-4ffd-bcc9-3c139c575209-catalog-content\") pod \"certified-operators-w7p5f\" (UID: \"5857b3d0-0865-4ffd-bcc9-3c139c575209\") " pod="openshift-marketplace/certified-operators-w7p5f" Mar 08 21:58:23.700774 master-0 kubenswrapper[7480]: I0308 21:58:23.700484 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-tn4pc" event={"ID":"2c2c4964-678e-46ac-a500-8efc6b8255d9","Type":"ContainerStarted","Data":"e1437f5db69404d802631f01a40eb772816aa23d22b78b82add335d21a0b77ce"} Mar 08 21:58:23.700774 master-0 kubenswrapper[7480]: I0308 21:58:23.700536 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-tn4pc" event={"ID":"2c2c4964-678e-46ac-a500-8efc6b8255d9","Type":"ContainerStarted","Data":"627ace5b53c8effa9e246bfd6af99dbd08bf8878208542c3b1c00eb2182540ad"} Mar 08 21:58:23.712154 master-0 kubenswrapper[7480]: I0308 21:58:23.711550 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c8wr\" (UniqueName: 
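
The event={...} payloads in the "SyncLoop (PLEG)" entries print as JSON: ID is the pod UID, Type the lifecycle transition, and Data a sandbox or container ID (the machine-approver pod above logs one ContainerStarted per ID). A small decoder for payloads captured from this journal; the struct is inferred from the log text, not taken from kubelet source:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// plegEvent mirrors the shape of the event={...} payloads as they appear in
// this journal; field names come from the log text, not from kubelet code.
type plegEvent struct {
	ID   string `json:"ID"`   // pod UID
	Type string `json:"Type"` // ContainerStarted, ContainerDied, ...
	Data string `json:"Data"` // sandbox or container ID
}

func main() {
	raw := `{"ID":"2c2c4964-678e-46ac-a500-8efc6b8255d9","Type":"ContainerStarted","Data":"627ace5b53c8effa9e246bfd6af99dbd08bf8878208542c3b1c00eb2182540ad"}`
	var e plegEvent
	if err := json.Unmarshal([]byte(raw), &e); err != nil {
		panic(err)
	}
	fmt.Printf("pod %s: %s %s\n", e.ID, e.Type, e.Data[:12])
}
```
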
\"kubernetes.io/projected/5857b3d0-0865-4ffd-bcc9-3c139c575209-kube-api-access-7c8wr\") pod \"certified-operators-w7p5f\" (UID: \"5857b3d0-0865-4ffd-bcc9-3c139c575209\") " pod="openshift-marketplace/certified-operators-w7p5f" Mar 08 21:58:23.713413 master-0 kubenswrapper[7480]: I0308 21:58:23.713255 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-c246n" event={"ID":"6eb502a1-db10-46ba-b698-461919464fb9","Type":"ContainerStarted","Data":"f656606ac6df85fac107c39c0c27a0a282ed80a965624e99277db535c27a6047"} Mar 08 21:58:23.807590 master-0 kubenswrapper[7480]: I0308 21:58:23.807423 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w7p5f" Mar 08 21:58:23.991037 master-0 kubenswrapper[7480]: I0308 21:58:23.990965 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8k5md"] Mar 08 21:58:24.404106 master-0 kubenswrapper[7480]: I0308 21:58:24.403994 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w7p5f"] Mar 08 21:58:24.437357 master-0 kubenswrapper[7480]: I0308 21:58:24.436843 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 08 21:58:24.437357 master-0 kubenswrapper[7480]: I0308 21:58:24.437035 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-1-master-0" podUID="8a9c4d25-8230-4111-b1ad-fd6427c16488" containerName="installer" containerID="cri-o://ed03c19f3cd282d9dc8aba54e8beb63ed0e914d6163152f2611419e70c3ad5ad" gracePeriod=30 Mar 08 21:58:24.721202 master-0 kubenswrapper[7480]: I0308 21:58:24.721097 7480 generic.go:334] "Generic (PLEG): container finished" podID="5857b3d0-0865-4ffd-bcc9-3c139c575209" containerID="f8bc6fe2b253e5b908b6548f58ddfca4458e0821748497845d3b3791b01ac875" exitCode=0 Mar 08 21:58:24.721704 master-0 kubenswrapper[7480]: I0308 21:58:24.721183 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w7p5f" event={"ID":"5857b3d0-0865-4ffd-bcc9-3c139c575209","Type":"ContainerDied","Data":"f8bc6fe2b253e5b908b6548f58ddfca4458e0821748497845d3b3791b01ac875"} Mar 08 21:58:24.721704 master-0 kubenswrapper[7480]: I0308 21:58:24.721274 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w7p5f" event={"ID":"5857b3d0-0865-4ffd-bcc9-3c139c575209","Type":"ContainerStarted","Data":"f0898c70bd4821b7587072ceaf944ff8498ad8e0f03772b1b705ce882893b76c"} Mar 08 21:58:24.724877 master-0 kubenswrapper[7480]: I0308 21:58:24.723580 7480 generic.go:334] "Generic (PLEG): container finished" podID="18d5d11d-3d01-448f-b34e-55ebc772f5e8" containerID="3b5430452bb2f26a5f4205484f896625833ba1cf6fded222ed84481fe9140384" exitCode=0 Mar 08 21:58:24.724877 master-0 kubenswrapper[7480]: I0308 21:58:24.724497 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8k5md" event={"ID":"18d5d11d-3d01-448f-b34e-55ebc772f5e8","Type":"ContainerDied","Data":"3b5430452bb2f26a5f4205484f896625833ba1cf6fded222ed84481fe9140384"} Mar 08 21:58:24.724877 master-0 kubenswrapper[7480]: I0308 21:58:24.724523 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8k5md" 
event={"ID":"18d5d11d-3d01-448f-b34e-55ebc772f5e8","Type":"ContainerStarted","Data":"8ab8f2e9850b184f21d02d18d922bb80d4a105657156f3e3896899fd2c2b2c8d"} Mar 08 21:58:24.862602 master-0 kubenswrapper[7480]: I0308 21:58:24.862545 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jcrxj"] Mar 08 21:58:24.869307 master-0 kubenswrapper[7480]: I0308 21:58:24.868960 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jcrxj" Mar 08 21:58:24.871301 master-0 kubenswrapper[7480]: I0308 21:58:24.871255 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-6lw8c" Mar 08 21:58:24.871469 master-0 kubenswrapper[7480]: I0308 21:58:24.871423 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jcrxj"] Mar 08 21:58:25.006298 master-0 kubenswrapper[7480]: I0308 21:58:25.006242 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74d0aed3-8d57-472f-a48a-14ac41d6575f-catalog-content\") pod \"redhat-marketplace-jcrxj\" (UID: \"74d0aed3-8d57-472f-a48a-14ac41d6575f\") " pod="openshift-marketplace/redhat-marketplace-jcrxj" Mar 08 21:58:25.006540 master-0 kubenswrapper[7480]: I0308 21:58:25.006346 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74d0aed3-8d57-472f-a48a-14ac41d6575f-utilities\") pod \"redhat-marketplace-jcrxj\" (UID: \"74d0aed3-8d57-472f-a48a-14ac41d6575f\") " pod="openshift-marketplace/redhat-marketplace-jcrxj" Mar 08 21:58:25.006540 master-0 kubenswrapper[7480]: I0308 21:58:25.006449 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfmhq\" (UniqueName: \"kubernetes.io/projected/74d0aed3-8d57-472f-a48a-14ac41d6575f-kube-api-access-mfmhq\") pod \"redhat-marketplace-jcrxj\" (UID: \"74d0aed3-8d57-472f-a48a-14ac41d6575f\") " pod="openshift-marketplace/redhat-marketplace-jcrxj" Mar 08 21:58:25.067823 master-0 kubenswrapper[7480]: I0308 21:58:25.067162 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz"] Mar 08 21:58:25.068562 master-0 kubenswrapper[7480]: I0308 21:58:25.068316 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz" Mar 08 21:58:25.071591 master-0 kubenswrapper[7480]: I0308 21:58:25.071567 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 08 21:58:25.071788 master-0 kubenswrapper[7480]: I0308 21:58:25.071772 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 08 21:58:25.075739 master-0 kubenswrapper[7480]: I0308 21:58:25.075700 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-7kdzp" Mar 08 21:58:25.078240 master-0 kubenswrapper[7480]: I0308 21:58:25.078197 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 08 21:58:25.084748 master-0 kubenswrapper[7480]: I0308 21:58:25.084636 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 08 21:58:25.102350 master-0 kubenswrapper[7480]: I0308 21:58:25.102289 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz"] Mar 08 21:58:25.107992 master-0 kubenswrapper[7480]: I0308 21:58:25.107938 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74d0aed3-8d57-472f-a48a-14ac41d6575f-utilities\") pod \"redhat-marketplace-jcrxj\" (UID: \"74d0aed3-8d57-472f-a48a-14ac41d6575f\") " pod="openshift-marketplace/redhat-marketplace-jcrxj" Mar 08 21:58:25.108133 master-0 kubenswrapper[7480]: I0308 21:58:25.108015 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfmhq\" (UniqueName: \"kubernetes.io/projected/74d0aed3-8d57-472f-a48a-14ac41d6575f-kube-api-access-mfmhq\") pod \"redhat-marketplace-jcrxj\" (UID: \"74d0aed3-8d57-472f-a48a-14ac41d6575f\") " pod="openshift-marketplace/redhat-marketplace-jcrxj" Mar 08 21:58:25.108133 master-0 kubenswrapper[7480]: I0308 21:58:25.108099 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74d0aed3-8d57-472f-a48a-14ac41d6575f-catalog-content\") pod \"redhat-marketplace-jcrxj\" (UID: \"74d0aed3-8d57-472f-a48a-14ac41d6575f\") " pod="openshift-marketplace/redhat-marketplace-jcrxj" Mar 08 21:58:25.108640 master-0 kubenswrapper[7480]: I0308 21:58:25.108606 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74d0aed3-8d57-472f-a48a-14ac41d6575f-catalog-content\") pod \"redhat-marketplace-jcrxj\" (UID: \"74d0aed3-8d57-472f-a48a-14ac41d6575f\") " pod="openshift-marketplace/redhat-marketplace-jcrxj" Mar 08 21:58:25.108918 master-0 kubenswrapper[7480]: I0308 21:58:25.108882 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74d0aed3-8d57-472f-a48a-14ac41d6575f-utilities\") pod \"redhat-marketplace-jcrxj\" (UID: \"74d0aed3-8d57-472f-a48a-14ac41d6575f\") " pod="openshift-marketplace/redhat-marketplace-jcrxj" Mar 08 21:58:25.127650 master-0 kubenswrapper[7480]: I0308 21:58:25.127534 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mfmhq\" (UniqueName: \"kubernetes.io/projected/74d0aed3-8d57-472f-a48a-14ac41d6575f-kube-api-access-mfmhq\") pod \"redhat-marketplace-jcrxj\" (UID: \"74d0aed3-8d57-472f-a48a-14ac41d6575f\") " pod="openshift-marketplace/redhat-marketplace-jcrxj" Mar 08 21:58:25.190422 master-0 kubenswrapper[7480]: I0308 21:58:25.190362 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jcrxj" Mar 08 21:58:25.209332 master-0 kubenswrapper[7480]: I0308 21:58:25.209254 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqkp4\" (UniqueName: \"kubernetes.io/projected/2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8-kube-api-access-dqkp4\") pod \"cloud-credential-operator-55d85b7b47-mfqlz\" (UID: \"2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz" Mar 08 21:58:25.209446 master-0 kubenswrapper[7480]: I0308 21:58:25.209359 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-mfqlz\" (UID: \"2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz" Mar 08 21:58:25.209446 master-0 kubenswrapper[7480]: I0308 21:58:25.209415 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-mfqlz\" (UID: \"2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz" Mar 08 21:58:25.311789 master-0 kubenswrapper[7480]: I0308 21:58:25.311722 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-mfqlz\" (UID: \"2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz" Mar 08 21:58:25.311789 master-0 kubenswrapper[7480]: I0308 21:58:25.311802 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqkp4\" (UniqueName: \"kubernetes.io/projected/2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8-kube-api-access-dqkp4\") pod \"cloud-credential-operator-55d85b7b47-mfqlz\" (UID: \"2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz" Mar 08 21:58:25.312126 master-0 kubenswrapper[7480]: I0308 21:58:25.311845 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-mfqlz\" (UID: \"2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz" Mar 08 21:58:25.316106 master-0 kubenswrapper[7480]: I0308 21:58:25.313919 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-mfqlz\" (UID: \"2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz" Mar 08 21:58:25.323098 master-0 kubenswrapper[7480]: I0308 21:58:25.317579 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-mfqlz\" (UID: \"2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz" Mar 08 21:58:25.341101 master-0 kubenswrapper[7480]: I0308 21:58:25.336815 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqkp4\" (UniqueName: \"kubernetes.io/projected/2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8-kube-api-access-dqkp4\") pod \"cloud-credential-operator-55d85b7b47-mfqlz\" (UID: \"2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz" Mar 08 21:58:25.406196 master-0 kubenswrapper[7480]: I0308 21:58:25.405464 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz" Mar 08 21:58:25.539538 master-0 kubenswrapper[7480]: I0308 21:58:25.539473 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mkvtk"] Mar 08 21:58:25.540527 master-0 kubenswrapper[7480]: I0308 21:58:25.540490 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mkvtk" Mar 08 21:58:25.544181 master-0 kubenswrapper[7480]: I0308 21:58:25.542888 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 08 21:58:25.544181 master-0 kubenswrapper[7480]: I0308 21:58:25.543144 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 08 21:58:25.544181 master-0 kubenswrapper[7480]: I0308 21:58:25.543717 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 08 21:58:25.545264 master-0 kubenswrapper[7480]: I0308 21:58:25.545235 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-tqhmq" Mar 08 21:58:25.567288 master-0 kubenswrapper[7480]: I0308 21:58:25.567220 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mkvtk"] Mar 08 21:58:25.618181 master-0 kubenswrapper[7480]: I0308 21:58:25.615730 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fd9abe2b-f829-4376-9abe-7da0a08770e7-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-mkvtk\" (UID: \"fd9abe2b-f829-4376-9abe-7da0a08770e7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mkvtk" Mar 08 21:58:25.618181 master-0 kubenswrapper[7480]: I0308 21:58:25.615866 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxssr\" (UniqueName: \"kubernetes.io/projected/fd9abe2b-f829-4376-9abe-7da0a08770e7-kube-api-access-vxssr\") pod \"cluster-samples-operator-664cb58b85-mkvtk\" (UID: \"fd9abe2b-f829-4376-9abe-7da0a08770e7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mkvtk" Mar 08 21:58:25.717602 master-0 kubenswrapper[7480]: I0308 21:58:25.717454 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fd9abe2b-f829-4376-9abe-7da0a08770e7-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-mkvtk\" (UID: \"fd9abe2b-f829-4376-9abe-7da0a08770e7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mkvtk" Mar 08 21:58:25.718057 master-0 kubenswrapper[7480]: I0308 21:58:25.718026 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxssr\" (UniqueName: \"kubernetes.io/projected/fd9abe2b-f829-4376-9abe-7da0a08770e7-kube-api-access-vxssr\") pod \"cluster-samples-operator-664cb58b85-mkvtk\" (UID: \"fd9abe2b-f829-4376-9abe-7da0a08770e7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mkvtk" Mar 08 21:58:25.721367 master-0 kubenswrapper[7480]: I0308 21:58:25.720790 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fd9abe2b-f829-4376-9abe-7da0a08770e7-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-mkvtk\" (UID: \"fd9abe2b-f829-4376-9abe-7da0a08770e7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mkvtk" Mar 08 21:58:25.746375 master-0 kubenswrapper[7480]: 
I0308 21:58:25.744305 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxssr\" (UniqueName: \"kubernetes.io/projected/fd9abe2b-f829-4376-9abe-7da0a08770e7-kube-api-access-vxssr\") pod \"cluster-samples-operator-664cb58b85-mkvtk\" (UID: \"fd9abe2b-f829-4376-9abe-7da0a08770e7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mkvtk" Mar 08 21:58:25.869011 master-0 kubenswrapper[7480]: I0308 21:58:25.868634 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mkvtk" Mar 08 21:58:26.260003 master-0 kubenswrapper[7480]: I0308 21:58:26.259829 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8w7wm"] Mar 08 21:58:26.260943 master-0 kubenswrapper[7480]: I0308 21:58:26.260911 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8w7wm" Mar 08 21:58:26.263055 master-0 kubenswrapper[7480]: I0308 21:58:26.263019 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-hlmng" Mar 08 21:58:26.278665 master-0 kubenswrapper[7480]: I0308 21:58:26.274491 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8w7wm"] Mar 08 21:58:26.326356 master-0 kubenswrapper[7480]: I0308 21:58:26.326300 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/088eecd9-a153-4fe0-af5a-78f5bdc0eb6b-utilities\") pod \"redhat-operators-8w7wm\" (UID: \"088eecd9-a153-4fe0-af5a-78f5bdc0eb6b\") " pod="openshift-marketplace/redhat-operators-8w7wm" Mar 08 21:58:26.326591 master-0 kubenswrapper[7480]: I0308 21:58:26.326367 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/088eecd9-a153-4fe0-af5a-78f5bdc0eb6b-catalog-content\") pod \"redhat-operators-8w7wm\" (UID: \"088eecd9-a153-4fe0-af5a-78f5bdc0eb6b\") " pod="openshift-marketplace/redhat-operators-8w7wm" Mar 08 21:58:26.326643 master-0 kubenswrapper[7480]: I0308 21:58:26.326563 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5t9m\" (UniqueName: \"kubernetes.io/projected/088eecd9-a153-4fe0-af5a-78f5bdc0eb6b-kube-api-access-w5t9m\") pod \"redhat-operators-8w7wm\" (UID: \"088eecd9-a153-4fe0-af5a-78f5bdc0eb6b\") " pod="openshift-marketplace/redhat-operators-8w7wm" Mar 08 21:58:26.427854 master-0 kubenswrapper[7480]: I0308 21:58:26.427785 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/088eecd9-a153-4fe0-af5a-78f5bdc0eb6b-utilities\") pod \"redhat-operators-8w7wm\" (UID: \"088eecd9-a153-4fe0-af5a-78f5bdc0eb6b\") " pod="openshift-marketplace/redhat-operators-8w7wm" Mar 08 21:58:26.427854 master-0 kubenswrapper[7480]: I0308 21:58:26.427845 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/088eecd9-a153-4fe0-af5a-78f5bdc0eb6b-catalog-content\") pod \"redhat-operators-8w7wm\" (UID: \"088eecd9-a153-4fe0-af5a-78f5bdc0eb6b\") " pod="openshift-marketplace/redhat-operators-8w7wm" Mar 08 21:58:26.428161 master-0 kubenswrapper[7480]: I0308 21:58:26.427907 7480 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5t9m\" (UniqueName: \"kubernetes.io/projected/088eecd9-a153-4fe0-af5a-78f5bdc0eb6b-kube-api-access-w5t9m\") pod \"redhat-operators-8w7wm\" (UID: \"088eecd9-a153-4fe0-af5a-78f5bdc0eb6b\") " pod="openshift-marketplace/redhat-operators-8w7wm" Mar 08 21:58:26.430089 master-0 kubenswrapper[7480]: I0308 21:58:26.428694 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/088eecd9-a153-4fe0-af5a-78f5bdc0eb6b-catalog-content\") pod \"redhat-operators-8w7wm\" (UID: \"088eecd9-a153-4fe0-af5a-78f5bdc0eb6b\") " pod="openshift-marketplace/redhat-operators-8w7wm" Mar 08 21:58:26.437173 master-0 kubenswrapper[7480]: I0308 21:58:26.432382 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/088eecd9-a153-4fe0-af5a-78f5bdc0eb6b-utilities\") pod \"redhat-operators-8w7wm\" (UID: \"088eecd9-a153-4fe0-af5a-78f5bdc0eb6b\") " pod="openshift-marketplace/redhat-operators-8w7wm" Mar 08 21:58:26.447795 master-0 kubenswrapper[7480]: I0308 21:58:26.447742 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5t9m\" (UniqueName: \"kubernetes.io/projected/088eecd9-a153-4fe0-af5a-78f5bdc0eb6b-kube-api-access-w5t9m\") pod \"redhat-operators-8w7wm\" (UID: \"088eecd9-a153-4fe0-af5a-78f5bdc0eb6b\") " pod="openshift-marketplace/redhat-operators-8w7wm" Mar 08 21:58:26.579498 master-0 kubenswrapper[7480]: I0308 21:58:26.579428 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8w7wm" Mar 08 21:58:26.756293 master-0 kubenswrapper[7480]: I0308 21:58:26.751535 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm"] Mar 08 21:58:26.756293 master-0 kubenswrapper[7480]: I0308 21:58:26.756178 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" Mar 08 21:58:26.765554 master-0 kubenswrapper[7480]: I0308 21:58:26.759680 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 08 21:58:26.765554 master-0 kubenswrapper[7480]: I0308 21:58:26.760581 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 08 21:58:26.765554 master-0 kubenswrapper[7480]: I0308 21:58:26.760887 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-b4pnr" Mar 08 21:58:26.765554 master-0 kubenswrapper[7480]: I0308 21:58:26.761201 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 08 21:58:26.765554 master-0 kubenswrapper[7480]: I0308 21:58:26.761423 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 08 21:58:26.781660 master-0 kubenswrapper[7480]: I0308 21:58:26.779948 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm"] Mar 08 21:58:26.836605 master-0 kubenswrapper[7480]: I0308 21:58:26.836485 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/d9e9c931-9595-42f1-bbc2-c412286f6cd1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-xwmmm\" (UID: \"d9e9c931-9595-42f1-bbc2-c412286f6cd1\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" Mar 08 21:58:26.836605 master-0 kubenswrapper[7480]: I0308 21:58:26.836575 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d9e9c931-9595-42f1-bbc2-c412286f6cd1-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-xwmmm\" (UID: \"d9e9c931-9595-42f1-bbc2-c412286f6cd1\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" Mar 08 21:58:26.836605 master-0 kubenswrapper[7480]: I0308 21:58:26.836610 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znqrj\" (UniqueName: \"kubernetes.io/projected/d9e9c931-9595-42f1-bbc2-c412286f6cd1-kube-api-access-znqrj\") pod \"cluster-baremetal-operator-5cdb4c5598-xwmmm\" (UID: \"d9e9c931-9595-42f1-bbc2-c412286f6cd1\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" Mar 08 21:58:26.836605 master-0 kubenswrapper[7480]: I0308 21:58:26.836649 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9e9c931-9595-42f1-bbc2-c412286f6cd1-config\") pod \"cluster-baremetal-operator-5cdb4c5598-xwmmm\" (UID: \"d9e9c931-9595-42f1-bbc2-c412286f6cd1\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" Mar 08 21:58:26.837234 master-0 kubenswrapper[7480]: I0308 21:58:26.836677 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d9e9c931-9595-42f1-bbc2-c412286f6cd1-images\") pod \"cluster-baremetal-operator-5cdb4c5598-xwmmm\" (UID: \"d9e9c931-9595-42f1-bbc2-c412286f6cd1\") " 
pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" Mar 08 21:58:26.939341 master-0 kubenswrapper[7480]: I0308 21:58:26.937880 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d9e9c931-9595-42f1-bbc2-c412286f6cd1-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-xwmmm\" (UID: \"d9e9c931-9595-42f1-bbc2-c412286f6cd1\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" Mar 08 21:58:26.939341 master-0 kubenswrapper[7480]: I0308 21:58:26.937941 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znqrj\" (UniqueName: \"kubernetes.io/projected/d9e9c931-9595-42f1-bbc2-c412286f6cd1-kube-api-access-znqrj\") pod \"cluster-baremetal-operator-5cdb4c5598-xwmmm\" (UID: \"d9e9c931-9595-42f1-bbc2-c412286f6cd1\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" Mar 08 21:58:26.939341 master-0 kubenswrapper[7480]: I0308 21:58:26.938580 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9e9c931-9595-42f1-bbc2-c412286f6cd1-config\") pod \"cluster-baremetal-operator-5cdb4c5598-xwmmm\" (UID: \"d9e9c931-9595-42f1-bbc2-c412286f6cd1\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" Mar 08 21:58:26.939341 master-0 kubenswrapper[7480]: I0308 21:58:26.938670 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d9e9c931-9595-42f1-bbc2-c412286f6cd1-images\") pod \"cluster-baremetal-operator-5cdb4c5598-xwmmm\" (UID: \"d9e9c931-9595-42f1-bbc2-c412286f6cd1\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" Mar 08 21:58:26.939341 master-0 kubenswrapper[7480]: I0308 21:58:26.938772 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/d9e9c931-9595-42f1-bbc2-c412286f6cd1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-xwmmm\" (UID: \"d9e9c931-9595-42f1-bbc2-c412286f6cd1\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" Mar 08 21:58:26.942128 master-0 kubenswrapper[7480]: I0308 21:58:26.940666 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d9e9c931-9595-42f1-bbc2-c412286f6cd1-images\") pod \"cluster-baremetal-operator-5cdb4c5598-xwmmm\" (UID: \"d9e9c931-9595-42f1-bbc2-c412286f6cd1\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" Mar 08 21:58:26.942128 master-0 kubenswrapper[7480]: I0308 21:58:26.941416 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9e9c931-9595-42f1-bbc2-c412286f6cd1-config\") pod \"cluster-baremetal-operator-5cdb4c5598-xwmmm\" (UID: \"d9e9c931-9595-42f1-bbc2-c412286f6cd1\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" Mar 08 21:58:26.954502 master-0 kubenswrapper[7480]: I0308 21:58:26.943306 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/d9e9c931-9595-42f1-bbc2-c412286f6cd1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-xwmmm\" (UID: \"d9e9c931-9595-42f1-bbc2-c412286f6cd1\") " 
pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" Mar 08 21:58:26.954502 master-0 kubenswrapper[7480]: I0308 21:58:26.943765 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d9e9c931-9595-42f1-bbc2-c412286f6cd1-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-xwmmm\" (UID: \"d9e9c931-9595-42f1-bbc2-c412286f6cd1\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" Mar 08 21:58:26.959396 master-0 kubenswrapper[7480]: I0308 21:58:26.959315 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znqrj\" (UniqueName: \"kubernetes.io/projected/d9e9c931-9595-42f1-bbc2-c412286f6cd1-kube-api-access-znqrj\") pod \"cluster-baremetal-operator-5cdb4c5598-xwmmm\" (UID: \"d9e9c931-9595-42f1-bbc2-c412286f6cd1\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" Mar 08 21:58:27.002950 master-0 kubenswrapper[7480]: I0308 21:58:27.002916 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mkvtk"] Mar 08 21:58:27.029963 master-0 kubenswrapper[7480]: I0308 21:58:27.029911 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 08 21:58:27.052549 master-0 kubenswrapper[7480]: I0308 21:58:27.051422 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 08 21:58:27.056280 master-0 kubenswrapper[7480]: I0308 21:58:27.053871 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 08 21:58:27.058091 master-0 kubenswrapper[7480]: I0308 21:58:27.058031 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-v7cvh" Mar 08 21:58:27.087065 master-0 kubenswrapper[7480]: I0308 21:58:27.086686 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" Mar 08 21:58:27.125889 master-0 kubenswrapper[7480]: I0308 21:58:27.125789 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jcrxj"] Mar 08 21:58:27.144323 master-0 kubenswrapper[7480]: I0308 21:58:27.141461 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78dc543f-66ed-4098-b5a9-699ec2ccc856-kube-api-access\") pod \"installer-2-master-0\" (UID: \"78dc543f-66ed-4098-b5a9-699ec2ccc856\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 08 21:58:27.144323 master-0 kubenswrapper[7480]: I0308 21:58:27.141539 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78dc543f-66ed-4098-b5a9-699ec2ccc856-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"78dc543f-66ed-4098-b5a9-699ec2ccc856\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 08 21:58:27.144323 master-0 kubenswrapper[7480]: I0308 21:58:27.141639 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/78dc543f-66ed-4098-b5a9-699ec2ccc856-var-lock\") pod \"installer-2-master-0\" (UID: \"78dc543f-66ed-4098-b5a9-699ec2ccc856\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 08 21:58:27.169191 master-0 kubenswrapper[7480]: I0308 21:58:27.168263 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg"] Mar 08 21:58:27.169682 master-0 kubenswrapper[7480]: I0308 21:58:27.169620 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" Mar 08 21:58:27.189361 master-0 kubenswrapper[7480]: I0308 21:58:27.189292 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-g8h2t" Mar 08 21:58:27.189712 master-0 kubenswrapper[7480]: I0308 21:58:27.189668 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 08 21:58:27.190188 master-0 kubenswrapper[7480]: I0308 21:58:27.190158 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 08 21:58:27.218193 master-0 kubenswrapper[7480]: W0308 21:58:27.215413 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f9399bc_ac2a_4eb3_b1a0_dd523e5a97c8.slice/crio-9d44f96a87d3e5a63998ef47058bf56c18f9a51e485b6d530baa6ae3a9c72e79 WatchSource:0}: Error finding container 9d44f96a87d3e5a63998ef47058bf56c18f9a51e485b6d530baa6ae3a9c72e79: Status 404 returned error can't find the container with id 9d44f96a87d3e5a63998ef47058bf56c18f9a51e485b6d530baa6ae3a9c72e79 Mar 08 21:58:27.226499 master-0 kubenswrapper[7480]: I0308 21:58:27.225677 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg"] Mar 08 21:58:27.231279 master-0 kubenswrapper[7480]: I0308 21:58:27.231249 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz"] Mar 08 21:58:27.243801 master-0 kubenswrapper[7480]: I0308 21:58:27.243740 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvmk7\" (UniqueName: \"kubernetes.io/projected/d9fe466f-5a23-4f69-8a96-44bd5d6194f5-kube-api-access-nvmk7\") pod \"cluster-autoscaler-operator-69576476f7-dvgxg\" (UID: \"d9fe466f-5a23-4f69-8a96-44bd5d6194f5\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" Mar 08 21:58:27.243920 master-0 kubenswrapper[7480]: I0308 21:58:27.243825 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/78dc543f-66ed-4098-b5a9-699ec2ccc856-var-lock\") pod \"installer-2-master-0\" (UID: \"78dc543f-66ed-4098-b5a9-699ec2ccc856\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 08 21:58:27.243920 master-0 kubenswrapper[7480]: I0308 21:58:27.243858 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78dc543f-66ed-4098-b5a9-699ec2ccc856-kube-api-access\") pod \"installer-2-master-0\" (UID: \"78dc543f-66ed-4098-b5a9-699ec2ccc856\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 08 21:58:27.243920 master-0 kubenswrapper[7480]: I0308 21:58:27.243881 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d9fe466f-5a23-4f69-8a96-44bd5d6194f5-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-dvgxg\" (UID: \"d9fe466f-5a23-4f69-8a96-44bd5d6194f5\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" Mar 08 21:58:27.243920 master-0 kubenswrapper[7480]: I0308 21:58:27.243906 7480 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78dc543f-66ed-4098-b5a9-699ec2ccc856-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"78dc543f-66ed-4098-b5a9-699ec2ccc856\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 08 21:58:27.244126 master-0 kubenswrapper[7480]: I0308 21:58:27.243928 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d9fe466f-5a23-4f69-8a96-44bd5d6194f5-cert\") pod \"cluster-autoscaler-operator-69576476f7-dvgxg\" (UID: \"d9fe466f-5a23-4f69-8a96-44bd5d6194f5\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" Mar 08 21:58:27.244126 master-0 kubenswrapper[7480]: I0308 21:58:27.244039 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/78dc543f-66ed-4098-b5a9-699ec2ccc856-var-lock\") pod \"installer-2-master-0\" (UID: \"78dc543f-66ed-4098-b5a9-699ec2ccc856\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 08 21:58:27.251509 master-0 kubenswrapper[7480]: I0308 21:58:27.244444 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78dc543f-66ed-4098-b5a9-699ec2ccc856-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"78dc543f-66ed-4098-b5a9-699ec2ccc856\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 08 21:58:27.265448 master-0 kubenswrapper[7480]: W0308 21:58:27.265401 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod088eecd9_a153_4fe0_af5a_78f5bdc0eb6b.slice/crio-46be7c8523987b3cf18afb32c173f063834fd54504cd12311bd2eab02b35bc4d WatchSource:0}: Error finding container 46be7c8523987b3cf18afb32c173f063834fd54504cd12311bd2eab02b35bc4d: Status 404 returned error can't find the container with id 46be7c8523987b3cf18afb32c173f063834fd54504cd12311bd2eab02b35bc4d Mar 08 21:58:27.273003 master-0 kubenswrapper[7480]: I0308 21:58:27.271121 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78dc543f-66ed-4098-b5a9-699ec2ccc856-kube-api-access\") pod \"installer-2-master-0\" (UID: \"78dc543f-66ed-4098-b5a9-699ec2ccc856\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 08 21:58:27.338291 master-0 kubenswrapper[7480]: I0308 21:58:27.337880 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8w7wm"] Mar 08 21:58:27.345305 master-0 kubenswrapper[7480]: I0308 21:58:27.344797 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvmk7\" (UniqueName: \"kubernetes.io/projected/d9fe466f-5a23-4f69-8a96-44bd5d6194f5-kube-api-access-nvmk7\") pod \"cluster-autoscaler-operator-69576476f7-dvgxg\" (UID: \"d9fe466f-5a23-4f69-8a96-44bd5d6194f5\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" Mar 08 21:58:27.345305 master-0 kubenswrapper[7480]: I0308 21:58:27.344897 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d9fe466f-5a23-4f69-8a96-44bd5d6194f5-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-dvgxg\" (UID: \"d9fe466f-5a23-4f69-8a96-44bd5d6194f5\") " 
pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" Mar 08 21:58:27.345305 master-0 kubenswrapper[7480]: I0308 21:58:27.344937 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d9fe466f-5a23-4f69-8a96-44bd5d6194f5-cert\") pod \"cluster-autoscaler-operator-69576476f7-dvgxg\" (UID: \"d9fe466f-5a23-4f69-8a96-44bd5d6194f5\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" Mar 08 21:58:27.349278 master-0 kubenswrapper[7480]: I0308 21:58:27.349224 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d9fe466f-5a23-4f69-8a96-44bd5d6194f5-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-dvgxg\" (UID: \"d9fe466f-5a23-4f69-8a96-44bd5d6194f5\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" Mar 08 21:58:27.355373 master-0 kubenswrapper[7480]: I0308 21:58:27.355318 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d9fe466f-5a23-4f69-8a96-44bd5d6194f5-cert\") pod \"cluster-autoscaler-operator-69576476f7-dvgxg\" (UID: \"d9fe466f-5a23-4f69-8a96-44bd5d6194f5\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" Mar 08 21:58:27.373951 master-0 kubenswrapper[7480]: I0308 21:58:27.373921 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvmk7\" (UniqueName: \"kubernetes.io/projected/d9fe466f-5a23-4f69-8a96-44bd5d6194f5-kube-api-access-nvmk7\") pod \"cluster-autoscaler-operator-69576476f7-dvgxg\" (UID: \"d9fe466f-5a23-4f69-8a96-44bd5d6194f5\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" Mar 08 21:58:27.385032 master-0 kubenswrapper[7480]: I0308 21:58:27.381588 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 08 21:58:27.452623 master-0 kubenswrapper[7480]: I0308 21:58:27.450353 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8k5md"] Mar 08 21:58:27.527334 master-0 kubenswrapper[7480]: I0308 21:58:27.526395 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" Mar 08 21:58:27.617526 master-0 kubenswrapper[7480]: I0308 21:58:27.617460 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm"] Mar 08 21:58:27.648429 master-0 kubenswrapper[7480]: I0308 21:58:27.645551 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-8f89dfddd-fn4ck"] Mar 08 21:58:27.648429 master-0 kubenswrapper[7480]: I0308 21:58:27.646673 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" Mar 08 21:58:27.648429 master-0 kubenswrapper[7480]: I0308 21:58:27.648415 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-lvhnl" Mar 08 21:58:27.653846 master-0 kubenswrapper[7480]: I0308 21:58:27.650448 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 08 21:58:27.653846 master-0 kubenswrapper[7480]: I0308 21:58:27.650623 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 08 21:58:27.653846 master-0 kubenswrapper[7480]: I0308 21:58:27.650720 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 08 21:58:27.653846 master-0 kubenswrapper[7480]: I0308 21:58:27.651605 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 08 21:58:27.665974 master-0 kubenswrapper[7480]: I0308 21:58:27.664376 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 08 21:58:27.667791 master-0 kubenswrapper[7480]: I0308 21:58:27.667730 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-8f89dfddd-fn4ck"] Mar 08 21:58:27.704054 master-0 kubenswrapper[7480]: W0308 21:58:27.703765 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9e9c931_9595_42f1_bbc2_c412286f6cd1.slice/crio-3115bea19c7db25d70ce89d976323f96371d246725faa8269d586e44afe79c19 WatchSource:0}: Error finding container 3115bea19c7db25d70ce89d976323f96371d246725faa8269d586e44afe79c19: Status 404 returned error can't find the container with id 3115bea19c7db25d70ce89d976323f96371d246725faa8269d586e44afe79c19 Mar 08 21:58:27.751486 master-0 kubenswrapper[7480]: I0308 21:58:27.750709 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6"] Mar 08 21:58:27.752101 master-0 kubenswrapper[7480]: I0308 21:58:27.751715 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6" Mar 08 21:58:27.757205 master-0 kubenswrapper[7480]: I0308 21:58:27.757164 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6"] Mar 08 21:58:27.758698 master-0 kubenswrapper[7480]: I0308 21:58:27.757309 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-c5hcb" Mar 08 21:58:27.758698 master-0 kubenswrapper[7480]: I0308 21:58:27.757354 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 08 21:58:27.762771 master-0 kubenswrapper[7480]: I0308 21:58:27.762208 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz" event={"ID":"2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8","Type":"ContainerStarted","Data":"294cff59d7c8d4cc43ab7857ed109621d4b5b6fd360227fbee62b81817851711"} Mar 08 21:58:27.762771 master-0 kubenswrapper[7480]: I0308 21:58:27.762287 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz" event={"ID":"2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8","Type":"ContainerStarted","Data":"9d44f96a87d3e5a63998ef47058bf56c18f9a51e485b6d530baa6ae3a9c72e79"} Mar 08 21:58:27.763317 master-0 kubenswrapper[7480]: I0308 21:58:27.763298 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66e50eed-e3ac-431f-931b-7c4c848c491b-service-ca-bundle\") pod \"insights-operator-8f89dfddd-fn4ck\" (UID: \"66e50eed-e3ac-431f-931b-7c4c848c491b\") " pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" Mar 08 21:58:27.763442 master-0 kubenswrapper[7480]: I0308 21:58:27.763429 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66e50eed-e3ac-431f-931b-7c4c848c491b-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-fn4ck\" (UID: \"66e50eed-e3ac-431f-931b-7c4c848c491b\") " pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" Mar 08 21:58:27.763565 master-0 kubenswrapper[7480]: I0308 21:58:27.763552 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66e50eed-e3ac-431f-931b-7c4c848c491b-serving-cert\") pod \"insights-operator-8f89dfddd-fn4ck\" (UID: \"66e50eed-e3ac-431f-931b-7c4c848c491b\") " pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" Mar 08 21:58:27.763697 master-0 kubenswrapper[7480]: I0308 21:58:27.763684 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjrqj\" (UniqueName: \"kubernetes.io/projected/66e50eed-e3ac-431f-931b-7c4c848c491b-kube-api-access-bjrqj\") pod \"insights-operator-8f89dfddd-fn4ck\" (UID: \"66e50eed-e3ac-431f-931b-7c4c848c491b\") " pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" Mar 08 21:58:27.763784 master-0 kubenswrapper[7480]: I0308 21:58:27.763770 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/66e50eed-e3ac-431f-931b-7c4c848c491b-snapshots\") pod 
\"insights-operator-8f89dfddd-fn4ck\" (UID: \"66e50eed-e3ac-431f-931b-7c4c848c491b\") " pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" Mar 08 21:58:27.764617 master-0 kubenswrapper[7480]: I0308 21:58:27.764160 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mkvtk" event={"ID":"fd9abe2b-f829-4376-9abe-7da0a08770e7","Type":"ContainerStarted","Data":"f08d60c032a49069a33366a771add75613c8b164c10de5edc94cf407f1fce2c7"} Mar 08 21:58:27.772804 master-0 kubenswrapper[7480]: I0308 21:58:27.772712 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-tn4pc" event={"ID":"2c2c4964-678e-46ac-a500-8efc6b8255d9","Type":"ContainerStarted","Data":"ad82ee8e15ddd42271a76ddef01cd2ec252bf21267dece8cdc826dbc50b614f7"} Mar 08 21:58:27.795945 master-0 kubenswrapper[7480]: I0308 21:58:27.795706 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_65148321-8caf-4e9c-80cc-ced8e2a8ac03/installer/0.log" Mar 08 21:58:27.796403 master-0 kubenswrapper[7480]: I0308 21:58:27.796380 7480 generic.go:334] "Generic (PLEG): container finished" podID="65148321-8caf-4e9c-80cc-ced8e2a8ac03" containerID="00da65f85d6a396bd144d8af9fedcda14ea9c9016de2176d13648b00d0ef6d29" exitCode=1 Mar 08 21:58:27.799850 master-0 kubenswrapper[7480]: I0308 21:58:27.799804 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" event={"ID":"d9e9c931-9595-42f1-bbc2-c412286f6cd1","Type":"ContainerStarted","Data":"3115bea19c7db25d70ce89d976323f96371d246725faa8269d586e44afe79c19"} Mar 08 21:58:27.799850 master-0 kubenswrapper[7480]: I0308 21:58:27.799847 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"65148321-8caf-4e9c-80cc-ced8e2a8ac03","Type":"ContainerDied","Data":"00da65f85d6a396bd144d8af9fedcda14ea9c9016de2176d13648b00d0ef6d29"} Mar 08 21:58:27.802920 master-0 kubenswrapper[7480]: I0308 21:58:27.802882 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-c246n" event={"ID":"6eb502a1-db10-46ba-b698-461919464fb9","Type":"ContainerStarted","Data":"8f7cb4c1d4399f77a4bee9272b7411e3d08f666e05ff23bad71da9a5b93158e4"} Mar 08 21:58:27.807348 master-0 kubenswrapper[7480]: I0308 21:58:27.807283 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-tn4pc" podStartSLOduration=2.41710742 podStartE2EDuration="5.807268041s" podCreationTimestamp="2026-03-08 21:58:22 +0000 UTC" firstStartedPulling="2026-03-08 21:58:23.226629659 +0000 UTC m=+53.680250261" lastFinishedPulling="2026-03-08 21:58:26.61679028 +0000 UTC m=+57.070410882" observedRunningTime="2026-03-08 21:58:27.805938066 +0000 UTC m=+58.259558678" watchObservedRunningTime="2026-03-08 21:58:27.807268041 +0000 UTC m=+58.260888643" Mar 08 21:58:27.818768 master-0 kubenswrapper[7480]: I0308 21:58:27.816157 7480 generic.go:334] "Generic (PLEG): container finished" podID="74d0aed3-8d57-472f-a48a-14ac41d6575f" containerID="6eef060b6b4951a91ab0717958f071e01f90849baa75aac0d8ce23ea70ec907c" exitCode=0 Mar 08 21:58:27.818768 master-0 kubenswrapper[7480]: I0308 21:58:27.816287 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcrxj" 
event={"ID":"74d0aed3-8d57-472f-a48a-14ac41d6575f","Type":"ContainerDied","Data":"6eef060b6b4951a91ab0717958f071e01f90849baa75aac0d8ce23ea70ec907c"} Mar 08 21:58:27.818768 master-0 kubenswrapper[7480]: I0308 21:58:27.816331 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcrxj" event={"ID":"74d0aed3-8d57-472f-a48a-14ac41d6575f","Type":"ContainerStarted","Data":"dbd0502e9633a163b882da4e059fc58d1cb8c50d2d7c3ae85f65ae7cfc636b5a"} Mar 08 21:58:27.831760 master-0 kubenswrapper[7480]: I0308 21:58:27.831706 7480 generic.go:334] "Generic (PLEG): container finished" podID="088eecd9-a153-4fe0-af5a-78f5bdc0eb6b" containerID="17354f9a78986dd3c8de787a809b49886d6ee3c4cad78116a2e66e3dae4db975" exitCode=0 Mar 08 21:58:27.831760 master-0 kubenswrapper[7480]: I0308 21:58:27.831761 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8w7wm" event={"ID":"088eecd9-a153-4fe0-af5a-78f5bdc0eb6b","Type":"ContainerDied","Data":"17354f9a78986dd3c8de787a809b49886d6ee3c4cad78116a2e66e3dae4db975"} Mar 08 21:58:27.831991 master-0 kubenswrapper[7480]: I0308 21:58:27.831791 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8w7wm" event={"ID":"088eecd9-a153-4fe0-af5a-78f5bdc0eb6b","Type":"ContainerStarted","Data":"46be7c8523987b3cf18afb32c173f063834fd54504cd12311bd2eab02b35bc4d"} Mar 08 21:58:27.855351 master-0 kubenswrapper[7480]: I0308 21:58:27.855285 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-c246n" podStartSLOduration=5.551325169 podStartE2EDuration="8.855259269s" podCreationTimestamp="2026-03-08 21:58:19 +0000 UTC" firstStartedPulling="2026-03-08 21:58:23.318100806 +0000 UTC m=+53.771721408" lastFinishedPulling="2026-03-08 21:58:26.622034906 +0000 UTC m=+57.075655508" observedRunningTime="2026-03-08 21:58:27.852263181 +0000 UTC m=+58.305883783" watchObservedRunningTime="2026-03-08 21:58:27.855259269 +0000 UTC m=+58.308879871" Mar 08 21:58:27.867116 master-0 kubenswrapper[7480]: I0308 21:58:27.866461 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-47cmq"] Mar 08 21:58:27.877121 master-0 kubenswrapper[7480]: I0308 21:58:27.871287 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-47cmq" Mar 08 21:58:27.877121 master-0 kubenswrapper[7480]: I0308 21:58:27.874361 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-lmwn6" Mar 08 21:58:27.877998 master-0 kubenswrapper[7480]: I0308 21:58:27.877952 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdfls\" (UniqueName: \"kubernetes.io/projected/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad-kube-api-access-sdfls\") pod \"cluster-storage-operator-6fbfc8dc8f-p68k6\" (UID: \"c228b17c-fd7b-4273-ac03-eac5d4a3a4ad\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6" Mar 08 21:58:27.878108 master-0 kubenswrapper[7480]: I0308 21:58:27.878057 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66e50eed-e3ac-431f-931b-7c4c848c491b-serving-cert\") pod \"insights-operator-8f89dfddd-fn4ck\" (UID: \"66e50eed-e3ac-431f-931b-7c4c848c491b\") " pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" Mar 08 21:58:27.878155 master-0 kubenswrapper[7480]: I0308 21:58:27.878125 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-p68k6\" (UID: \"c228b17c-fd7b-4273-ac03-eac5d4a3a4ad\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6" Mar 08 21:58:27.878668 master-0 kubenswrapper[7480]: I0308 21:58:27.878432 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjrqj\" (UniqueName: \"kubernetes.io/projected/66e50eed-e3ac-431f-931b-7c4c848c491b-kube-api-access-bjrqj\") pod \"insights-operator-8f89dfddd-fn4ck\" (UID: \"66e50eed-e3ac-431f-931b-7c4c848c491b\") " pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" Mar 08 21:58:27.878668 master-0 kubenswrapper[7480]: I0308 21:58:27.878554 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/66e50eed-e3ac-431f-931b-7c4c848c491b-snapshots\") pod \"insights-operator-8f89dfddd-fn4ck\" (UID: \"66e50eed-e3ac-431f-931b-7c4c848c491b\") " pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" Mar 08 21:58:27.878668 master-0 kubenswrapper[7480]: I0308 21:58:27.878626 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66e50eed-e3ac-431f-931b-7c4c848c491b-service-ca-bundle\") pod \"insights-operator-8f89dfddd-fn4ck\" (UID: \"66e50eed-e3ac-431f-931b-7c4c848c491b\") " pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" Mar 08 21:58:27.878668 master-0 kubenswrapper[7480]: I0308 21:58:27.878664 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66e50eed-e3ac-431f-931b-7c4c848c491b-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-fn4ck\" (UID: \"66e50eed-e3ac-431f-931b-7c4c848c491b\") " pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" Mar 08 21:58:27.889054 master-0 kubenswrapper[7480]: I0308 21:58:27.884634 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/66e50eed-e3ac-431f-931b-7c4c848c491b-serving-cert\") pod \"insights-operator-8f89dfddd-fn4ck\" (UID: \"66e50eed-e3ac-431f-931b-7c4c848c491b\") " pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" Mar 08 21:58:27.889054 master-0 kubenswrapper[7480]: I0308 21:58:27.885048 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/66e50eed-e3ac-431f-931b-7c4c848c491b-snapshots\") pod \"insights-operator-8f89dfddd-fn4ck\" (UID: \"66e50eed-e3ac-431f-931b-7c4c848c491b\") " pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" Mar 08 21:58:27.889054 master-0 kubenswrapper[7480]: I0308 21:58:27.886747 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66e50eed-e3ac-431f-931b-7c4c848c491b-service-ca-bundle\") pod \"insights-operator-8f89dfddd-fn4ck\" (UID: \"66e50eed-e3ac-431f-931b-7c4c848c491b\") " pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" Mar 08 21:58:27.889054 master-0 kubenswrapper[7480]: I0308 21:58:27.888646 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66e50eed-e3ac-431f-931b-7c4c848c491b-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-fn4ck\" (UID: \"66e50eed-e3ac-431f-931b-7c4c848c491b\") " pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" Mar 08 21:58:27.891033 master-0 kubenswrapper[7480]: I0308 21:58:27.890877 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-47cmq"] Mar 08 21:58:27.925212 master-0 kubenswrapper[7480]: I0308 21:58:27.925003 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 08 21:58:27.929136 master-0 kubenswrapper[7480]: I0308 21:58:27.928812 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjrqj\" (UniqueName: \"kubernetes.io/projected/66e50eed-e3ac-431f-931b-7c4c848c491b-kube-api-access-bjrqj\") pod \"insights-operator-8f89dfddd-fn4ck\" (UID: \"66e50eed-e3ac-431f-931b-7c4c848c491b\") " pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" Mar 08 21:58:27.968882 master-0 kubenswrapper[7480]: I0308 21:58:27.968805 7480 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 08 21:58:27.969791 master-0 kubenswrapper[7480]: I0308 21:58:27.969429 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd" containerID="cri-o://f40be1d4a754000339d3870a29f35b23044b2b81588631c57cf192ab4e70d6fd" gracePeriod=30 Mar 08 21:58:27.971543 master-0 kubenswrapper[7480]: I0308 21:58:27.969379 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl" containerID="cri-o://ca95d22d6228d434ce4ed2f415b15a00e7effc076e30de148f0569774a6d01db" gracePeriod=30 Mar 08 21:58:27.972015 master-0 kubenswrapper[7480]: I0308 21:58:27.971966 7480 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 08 21:58:27.988161 master-0 kubenswrapper[7480]: E0308 21:58:27.972248 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354f29997baa583b6238f7de9108ee10" 
containerName="etcd" Mar 08 21:58:27.988161 master-0 kubenswrapper[7480]: I0308 21:58:27.972262 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd" Mar 08 21:58:27.988161 master-0 kubenswrapper[7480]: E0308 21:58:27.972273 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl" Mar 08 21:58:27.988161 master-0 kubenswrapper[7480]: I0308 21:58:27.972279 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl" Mar 08 21:58:27.988161 master-0 kubenswrapper[7480]: I0308 21:58:27.972405 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl" Mar 08 21:58:27.988161 master-0 kubenswrapper[7480]: I0308 21:58:27.972422 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd" Mar 08 21:58:27.988161 master-0 kubenswrapper[7480]: I0308 21:58:27.975398 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 08 21:58:27.988161 master-0 kubenswrapper[7480]: I0308 21:58:27.986052 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89619d97-2c16-4e76-ba80-8b519f6a9366-catalog-content\") pod \"community-operators-47cmq\" (UID: \"89619d97-2c16-4e76-ba80-8b519f6a9366\") " pod="openshift-marketplace/community-operators-47cmq" Mar 08 21:58:27.988161 master-0 kubenswrapper[7480]: I0308 21:58:27.986153 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdfls\" (UniqueName: \"kubernetes.io/projected/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad-kube-api-access-sdfls\") pod \"cluster-storage-operator-6fbfc8dc8f-p68k6\" (UID: \"c228b17c-fd7b-4273-ac03-eac5d4a3a4ad\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6" Mar 08 21:58:27.988161 master-0 kubenswrapper[7480]: I0308 21:58:27.986230 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-p68k6\" (UID: \"c228b17c-fd7b-4273-ac03-eac5d4a3a4ad\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6" Mar 08 21:58:27.988161 master-0 kubenswrapper[7480]: I0308 21:58:27.986272 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89619d97-2c16-4e76-ba80-8b519f6a9366-utilities\") pod \"community-operators-47cmq\" (UID: \"89619d97-2c16-4e76-ba80-8b519f6a9366\") " pod="openshift-marketplace/community-operators-47cmq" Mar 08 21:58:27.988161 master-0 kubenswrapper[7480]: I0308 21:58:27.986361 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj5rx\" (UniqueName: \"kubernetes.io/projected/89619d97-2c16-4e76-ba80-8b519f6a9366-kube-api-access-zj5rx\") pod \"community-operators-47cmq\" (UID: \"89619d97-2c16-4e76-ba80-8b519f6a9366\") " pod="openshift-marketplace/community-operators-47cmq" Mar 08 21:58:28.005452 master-0 kubenswrapper[7480]: I0308 21:58:28.005183 7480 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-p68k6\" (UID: \"c228b17c-fd7b-4273-ac03-eac5d4a3a4ad\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6" Mar 08 21:58:28.034164 master-0 kubenswrapper[7480]: I0308 21:58:28.034101 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" Mar 08 21:58:28.078775 master-0 kubenswrapper[7480]: I0308 21:58:28.078728 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_65148321-8caf-4e9c-80cc-ced8e2a8ac03/installer/0.log" Mar 08 21:58:28.078954 master-0 kubenswrapper[7480]: I0308 21:58:28.078810 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 08 21:58:28.088220 master-0 kubenswrapper[7480]: I0308 21:58:28.088171 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89619d97-2c16-4e76-ba80-8b519f6a9366-catalog-content\") pod \"community-operators-47cmq\" (UID: \"89619d97-2c16-4e76-ba80-8b519f6a9366\") " pod="openshift-marketplace/community-operators-47cmq" Mar 08 21:58:28.088355 master-0 kubenswrapper[7480]: I0308 21:58:28.088230 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 08 21:58:28.088355 master-0 kubenswrapper[7480]: I0308 21:58:28.088254 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 08 21:58:28.088355 master-0 kubenswrapper[7480]: I0308 21:58:28.088283 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 08 21:58:28.088355 master-0 kubenswrapper[7480]: I0308 21:58:28.088315 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 08 21:58:28.088355 master-0 kubenswrapper[7480]: I0308 21:58:28.088347 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 08 21:58:28.088541 master-0 kubenswrapper[7480]: I0308 21:58:28.088373 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/89619d97-2c16-4e76-ba80-8b519f6a9366-utilities\") pod \"community-operators-47cmq\" (UID: \"89619d97-2c16-4e76-ba80-8b519f6a9366\") " pod="openshift-marketplace/community-operators-47cmq" Mar 08 21:58:28.088541 master-0 kubenswrapper[7480]: I0308 21:58:28.088408 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj5rx\" (UniqueName: \"kubernetes.io/projected/89619d97-2c16-4e76-ba80-8b519f6a9366-kube-api-access-zj5rx\") pod \"community-operators-47cmq\" (UID: \"89619d97-2c16-4e76-ba80-8b519f6a9366\") " pod="openshift-marketplace/community-operators-47cmq" Mar 08 21:58:28.088541 master-0 kubenswrapper[7480]: I0308 21:58:28.088441 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 08 21:58:28.088969 master-0 kubenswrapper[7480]: I0308 21:58:28.088940 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89619d97-2c16-4e76-ba80-8b519f6a9366-catalog-content\") pod \"community-operators-47cmq\" (UID: \"89619d97-2c16-4e76-ba80-8b519f6a9366\") " pod="openshift-marketplace/community-operators-47cmq" Mar 08 21:58:28.089276 master-0 kubenswrapper[7480]: I0308 21:58:28.089257 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89619d97-2c16-4e76-ba80-8b519f6a9366-utilities\") pod \"community-operators-47cmq\" (UID: \"89619d97-2c16-4e76-ba80-8b519f6a9366\") " pod="openshift-marketplace/community-operators-47cmq" Mar 08 21:58:28.189429 master-0 kubenswrapper[7480]: I0308 21:58:28.189382 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/65148321-8caf-4e9c-80cc-ced8e2a8ac03-var-lock\") pod \"65148321-8caf-4e9c-80cc-ced8e2a8ac03\" (UID: \"65148321-8caf-4e9c-80cc-ced8e2a8ac03\") " Mar 08 21:58:28.189673 master-0 kubenswrapper[7480]: I0308 21:58:28.189564 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65148321-8caf-4e9c-80cc-ced8e2a8ac03-kubelet-dir\") pod \"65148321-8caf-4e9c-80cc-ced8e2a8ac03\" (UID: \"65148321-8caf-4e9c-80cc-ced8e2a8ac03\") " Mar 08 21:58:28.189673 master-0 kubenswrapper[7480]: I0308 21:58:28.189626 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65148321-8caf-4e9c-80cc-ced8e2a8ac03-kube-api-access\") pod \"65148321-8caf-4e9c-80cc-ced8e2a8ac03\" (UID: \"65148321-8caf-4e9c-80cc-ced8e2a8ac03\") " Mar 08 21:58:28.189833 master-0 kubenswrapper[7480]: I0308 21:58:28.189809 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 08 21:58:28.189881 master-0 kubenswrapper[7480]: I0308 21:58:28.189837 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: 
\"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 08 21:58:28.189881 master-0 kubenswrapper[7480]: I0308 21:58:28.189859 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 08 21:58:28.189881 master-0 kubenswrapper[7480]: I0308 21:58:28.189881 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 08 21:58:28.189992 master-0 kubenswrapper[7480]: I0308 21:58:28.189902 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 08 21:58:28.189992 master-0 kubenswrapper[7480]: I0308 21:58:28.189977 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 08 21:58:28.190370 master-0 kubenswrapper[7480]: I0308 21:58:28.190343 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 08 21:58:28.190573 master-0 kubenswrapper[7480]: I0308 21:58:28.190524 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65148321-8caf-4e9c-80cc-ced8e2a8ac03-var-lock" (OuterVolumeSpecName: "var-lock") pod "65148321-8caf-4e9c-80cc-ced8e2a8ac03" (UID: "65148321-8caf-4e9c-80cc-ced8e2a8ac03"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 21:58:28.190902 master-0 kubenswrapper[7480]: I0308 21:58:28.190862 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65148321-8caf-4e9c-80cc-ced8e2a8ac03-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "65148321-8caf-4e9c-80cc-ced8e2a8ac03" (UID: "65148321-8caf-4e9c-80cc-ced8e2a8ac03"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 21:58:28.191701 master-0 kubenswrapper[7480]: I0308 21:58:28.191635 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 08 21:58:28.191701 master-0 kubenswrapper[7480]: I0308 21:58:28.191675 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 08 21:58:28.191815 master-0 kubenswrapper[7480]: I0308 21:58:28.191702 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 08 21:58:28.191815 master-0 kubenswrapper[7480]: I0308 21:58:28.191730 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 08 21:58:28.191815 master-0 kubenswrapper[7480]: I0308 21:58:28.191757 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 08 21:58:28.195236 master-0 kubenswrapper[7480]: I0308 21:58:28.195197 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65148321-8caf-4e9c-80cc-ced8e2a8ac03-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "65148321-8caf-4e9c-80cc-ced8e2a8ac03" (UID: "65148321-8caf-4e9c-80cc-ced8e2a8ac03"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 21:58:28.291584 master-0 kubenswrapper[7480]: I0308 21:58:28.291533 7480 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65148321-8caf-4e9c-80cc-ced8e2a8ac03-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 08 21:58:28.291584 master-0 kubenswrapper[7480]: I0308 21:58:28.291579 7480 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/65148321-8caf-4e9c-80cc-ced8e2a8ac03-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 08 21:58:28.291584 master-0 kubenswrapper[7480]: I0308 21:58:28.291593 7480 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65148321-8caf-4e9c-80cc-ced8e2a8ac03-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 21:58:28.840366 master-0 kubenswrapper[7480]: I0308 21:58:28.840312 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_65148321-8caf-4e9c-80cc-ced8e2a8ac03/installer/0.log" Mar 08 21:58:28.842057 master-0 kubenswrapper[7480]: I0308 21:58:28.840448 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"65148321-8caf-4e9c-80cc-ced8e2a8ac03","Type":"ContainerDied","Data":"454aa3f28a441e0884b9b6514f179a846a609d67518a83cc9ce725de23e88a51"} Mar 08 21:58:28.842057 master-0 kubenswrapper[7480]: I0308 21:58:28.840485 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 08 21:58:28.842057 master-0 kubenswrapper[7480]: I0308 21:58:28.840514 7480 scope.go:117] "RemoveContainer" containerID="00da65f85d6a396bd144d8af9fedcda14ea9c9016de2176d13648b00d0ef6d29" Mar 08 21:58:28.842517 master-0 kubenswrapper[7480]: I0308 21:58:28.842224 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"78dc543f-66ed-4098-b5a9-699ec2ccc856","Type":"ContainerStarted","Data":"8885706fe3eb5e1a7daf09d862d9ef81922973f55e3d7589baf732cdce1cb547"} Mar 08 21:58:29.853579 master-0 kubenswrapper[7480]: I0308 21:58:29.853512 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"78dc543f-66ed-4098-b5a9-699ec2ccc856","Type":"ContainerStarted","Data":"b72861ea5791b8527c79a3ba9ca252aad4949d7fe333b8f4afa8d681aa68f9d1"} Mar 08 21:58:31.867435 master-0 kubenswrapper[7480]: I0308 21:58:31.867243 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mkvtk" event={"ID":"fd9abe2b-f829-4376-9abe-7da0a08770e7","Type":"ContainerStarted","Data":"a081eaa1fe28cb625de6cbd34bf82fe380f1125f6fc13709be875ffb66e10712"} Mar 08 21:58:31.867435 master-0 kubenswrapper[7480]: I0308 21:58:31.867296 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mkvtk" event={"ID":"fd9abe2b-f829-4376-9abe-7da0a08770e7","Type":"ContainerStarted","Data":"3d5f85e25df37bc23b86ad59b79c59dee68778a01ef1c8a85a90f6ca1894bc34"} Mar 08 21:58:31.871773 master-0 kubenswrapper[7480]: I0308 21:58:31.871138 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" 
event={"ID":"d9e9c931-9595-42f1-bbc2-c412286f6cd1","Type":"ContainerStarted","Data":"93b37166e7a76abfca6ddb5300495d48bbcbeedf6828ba2c36f322ef2fec8592"} Mar 08 21:58:31.871773 master-0 kubenswrapper[7480]: I0308 21:58:31.871197 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" event={"ID":"d9e9c931-9595-42f1-bbc2-c412286f6cd1","Type":"ContainerStarted","Data":"6edcb8198a1dd9b552f9d5577953c53700190a2b87b4307329abfdbc057033f6"} Mar 08 21:58:41.026104 master-0 kubenswrapper[7480]: E0308 21:58:41.025973 7480 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 08 21:58:41.027150 master-0 kubenswrapper[7480]: I0308 21:58:41.026537 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 08 21:58:41.607154 master-0 kubenswrapper[7480]: E0308 21:58:41.606892 7480 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 21:58:42.155114 master-0 kubenswrapper[7480]: E0308 21:58:42.155025 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T21:58:32Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T21:58:32Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T21:58:32Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T21:58:32Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 21:58:42.774017 master-0 kubenswrapper[7480]: E0308 21:58:42.773870 7480 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f1a7900_a0b2_47fc_b43c_a0a5dee6b657.slice/crio-7f5daa2de1f6df01131ca1902e342e04bb1b827c174c05c0eca0fbaf2e99d63e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf78c05e1499b533b83f091333d61f045.slice/crio-conmon-a38551dfdc41a69ef8701c103d6e1d1e4d82312c574f00743d9049e10be45331.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4d01185_e485_4697_92c2_31a044f25d82.slice/crio-conmon-976b58fc1120e6fabea3f3e742b338197d6177744cd2eaa08b2fbcbf40997975.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4382d186_34e4_40af_9b92_bb17ddcaa23f.slice/crio-08565425081deee92c7687162594caf9377c56e50d0f8c15ad4e7a1783f348a3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0641333_feda_44c5_baf5_ceee4ce3fd8f.slice/crio-conmon-5751dfe5fd1540121098ed40ec13958c4c24971f6926ffb1a819efce4539e20b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb849f992_1020_4633_98be_75705b962fa9.slice/crio-conmon-8c3b4ceb41e704efa5c6310b7c7f2f9b2d1143f2c6b51bbd8b89428ef29903d5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37bf82cb_adea_46d3_a899_136eb1d1f292.slice/crio-conmon-0cd82806e32f5aff3882b12920f789736d13e0c10e69c1b8897b987b79b257f6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8e00c74_fb72_4e3d_a22c_c38a4772a813.slice/crio-conmon-d64e1995a40f2b6bea5cb4fa3fec2bd3a410ce12876f506350307307d5f98025.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb849f992_1020_4633_98be_75705b962fa9.slice/crio-8c3b4ceb41e704efa5c6310b7c7f2f9b2d1143f2c6b51bbd8b89428ef29903d5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf78c05e1499b533b83f091333d61f045.slice/crio-a38551dfdc41a69ef8701c103d6e1d1e4d82312c574f00743d9049e10be45331.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6fbc12f_3c27_4a7a_933f_43a55c960335.slice/crio-conmon-203d5ffd6b42767986b0c00fad6bf5e37cd85c80fbadcc44e7962f2db5d381a3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d851f97_b21e_432e_a4c3_dc0a8ff00e84.slice/crio-e5e59c15849212680188c2c1c82809a383667ed9f6ba095936c93afd76943525.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod8a9c4d25_8230_4111_b1ad_fd6427c16488.slice/crio-ed03c19f3cd282d9dc8aba54e8beb63ed0e914d6163152f2611419e70c3ad5ad.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod8a9c4d25_8230_4111_b1ad_fd6427c16488.slice/crio-conmon-ed03c19f3cd282d9dc8aba54e8beb63ed0e914d6163152f2611419e70c3ad5ad.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0641333_feda_44c5_baf5_ceee4ce3fd8f.slice/crio-5751dfe5fd1540121098ed40ec13958c4c24971f6926ffb1a819efce4539e20b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod971ffa86_4d52_4dc3_ba28_03d116ec3494.slice/crio-bde48ef8ba183c4b97bbaad5f4d2d7f67d2b58f718bad639e2b804c056dd9fd8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod971ffa86_4d52_4dc3_ba28_03d116ec3494.slice/crio-conmon-bde48ef8ba183c4b97bbaad5f4d2d7f67d2b58f718bad639e2b804c056dd9fd8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6fbc12f_3c27_4a7a_933f_43a55c960335.slice/crio-203d5ffd6b42767986b0c00fad6bf5e37cd85c80fbadcc44e7962f2db5d381a3.scope\": RecentStats: unable to find data in memory 
cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37bf82cb_adea_46d3_a899_136eb1d1f292.slice/crio-0cd82806e32f5aff3882b12920f789736d13e0c10e69c1b8897b987b79b257f6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod57a34dbc_eb6d_44f5_b1aa_4762b69382ed.slice/crio-conmon-11d598a821a501bbacbf414ba9cb9b4053b94492a8ef82c31d41892148ed5df2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8e00c74_fb72_4e3d_a22c_c38a4772a813.slice/crio-d64e1995a40f2b6bea5cb4fa3fec2bd3a410ce12876f506350307307d5f98025.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d851f97_b21e_432e_a4c3_dc0a8ff00e84.slice/crio-conmon-e5e59c15849212680188c2c1c82809a383667ed9f6ba095936c93afd76943525.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4382d186_34e4_40af_9b92_bb17ddcaa23f.slice/crio-conmon-08565425081deee92c7687162594caf9377c56e50d0f8c15ad4e7a1783f348a3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4d01185_e485_4697_92c2_31a044f25d82.slice/crio-976b58fc1120e6fabea3f3e742b338197d6177744cd2eaa08b2fbcbf40997975.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f1a7900_a0b2_47fc_b43c_a0a5dee6b657.slice/crio-conmon-7f5daa2de1f6df01131ca1902e342e04bb1b827c174c05c0eca0fbaf2e99d63e.scope\": RecentStats: unable to find data in memory cache]" Mar 08 21:58:43.049783 master-0 kubenswrapper[7480]: I0308 21:58:43.049605 7480 prober.go:107] "Probe failed" probeType="Readiness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 08 21:58:43.974160 master-0 kubenswrapper[7480]: I0308 21:58:43.974029 7480 prober.go:107] "Probe failed" probeType="Liveness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 08 21:58:48.806469 master-0 kubenswrapper[7480]: W0308 21:58:48.806399 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e52bef89f4b50e4590a1719bcc5d7e5.slice/crio-6c62a387831366e46750d8eb479bd6368b8cd61b2c5cb43730138aecabb49c39 WatchSource:0}: Error finding container 6c62a387831366e46750d8eb479bd6368b8cd61b2c5cb43730138aecabb49c39: Status 404 returned error can't find the container with id 6c62a387831366e46750d8eb479bd6368b8cd61b2c5cb43730138aecabb49c39 Mar 08 21:58:48.972360 master-0 kubenswrapper[7480]: I0308 21:58:48.972284 7480 generic.go:334] "Generic (PLEG): container finished" podID="57a34dbc-eb6d-44f5-b1aa-4762b69382ed" containerID="11d598a821a501bbacbf414ba9cb9b4053b94492a8ef82c31d41892148ed5df2" exitCode=0 Mar 08 21:58:48.972616 master-0 kubenswrapper[7480]: I0308 21:58:48.972381 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" 
event={"ID":"57a34dbc-eb6d-44f5-b1aa-4762b69382ed","Type":"ContainerDied","Data":"11d598a821a501bbacbf414ba9cb9b4053b94492a8ef82c31d41892148ed5df2"} Mar 08 21:58:48.975442 master-0 kubenswrapper[7480]: I0308 21:58:48.975399 7480 generic.go:334] "Generic (PLEG): container finished" podID="a1a56802af72ce1aac6b5077f1695ac0" containerID="f50874fd44a38fe2052c0dd021aa5c5eab2b987367eeee5b46f35dae49f0f668" exitCode=1 Mar 08 21:58:48.975522 master-0 kubenswrapper[7480]: I0308 21:58:48.975454 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerDied","Data":"f50874fd44a38fe2052c0dd021aa5c5eab2b987367eeee5b46f35dae49f0f668"} Mar 08 21:58:48.976561 master-0 kubenswrapper[7480]: I0308 21:58:48.976533 7480 scope.go:117] "RemoveContainer" containerID="f50874fd44a38fe2052c0dd021aa5c5eab2b987367eeee5b46f35dae49f0f668" Mar 08 21:58:48.985958 master-0 kubenswrapper[7480]: I0308 21:58:48.985894 7480 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="a38551dfdc41a69ef8701c103d6e1d1e4d82312c574f00743d9049e10be45331" exitCode=1 Mar 08 21:58:48.986049 master-0 kubenswrapper[7480]: I0308 21:58:48.986019 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"a38551dfdc41a69ef8701c103d6e1d1e4d82312c574f00743d9049e10be45331"} Mar 08 21:58:48.986753 master-0 kubenswrapper[7480]: I0308 21:58:48.986720 7480 scope.go:117] "RemoveContainer" containerID="a38551dfdc41a69ef8701c103d6e1d1e4d82312c574f00743d9049e10be45331" Mar 08 21:58:49.000104 master-0 kubenswrapper[7480]: I0308 21:58:48.999476 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_8a9c4d25-8230-4111-b1ad-fd6427c16488/installer/0.log" Mar 08 21:58:49.000104 master-0 kubenswrapper[7480]: I0308 21:58:48.999540 7480 generic.go:334] "Generic (PLEG): container finished" podID="8a9c4d25-8230-4111-b1ad-fd6427c16488" containerID="ed03c19f3cd282d9dc8aba54e8beb63ed0e914d6163152f2611419e70c3ad5ad" exitCode=1 Mar 08 21:58:49.000104 master-0 kubenswrapper[7480]: I0308 21:58:48.999655 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"8a9c4d25-8230-4111-b1ad-fd6427c16488","Type":"ContainerDied","Data":"ed03c19f3cd282d9dc8aba54e8beb63ed0e914d6163152f2611419e70c3ad5ad"} Mar 08 21:58:49.004279 master-0 kubenswrapper[7480]: I0308 21:58:49.001877 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"6c62a387831366e46750d8eb479bd6368b8cd61b2c5cb43730138aecabb49c39"} Mar 08 21:58:49.031009 master-0 kubenswrapper[7480]: I0308 21:58:49.030754 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 21:58:49.125794 master-0 kubenswrapper[7480]: I0308 21:58:49.125745 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_8a9c4d25-8230-4111-b1ad-fd6427c16488/installer/0.log" Mar 08 21:58:49.125906 master-0 kubenswrapper[7480]: I0308 21:58:49.125850 7480 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 08 21:58:49.285425 master-0 kubenswrapper[7480]: I0308 21:58:49.285381 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8a9c4d25-8230-4111-b1ad-fd6427c16488-kubelet-dir\") pod \"8a9c4d25-8230-4111-b1ad-fd6427c16488\" (UID: \"8a9c4d25-8230-4111-b1ad-fd6427c16488\") " Mar 08 21:58:49.285571 master-0 kubenswrapper[7480]: I0308 21:58:49.285525 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8a9c4d25-8230-4111-b1ad-fd6427c16488-kube-api-access\") pod \"8a9c4d25-8230-4111-b1ad-fd6427c16488\" (UID: \"8a9c4d25-8230-4111-b1ad-fd6427c16488\") " Mar 08 21:58:49.285571 master-0 kubenswrapper[7480]: I0308 21:58:49.285534 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a9c4d25-8230-4111-b1ad-fd6427c16488-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8a9c4d25-8230-4111-b1ad-fd6427c16488" (UID: "8a9c4d25-8230-4111-b1ad-fd6427c16488"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 21:58:49.285653 master-0 kubenswrapper[7480]: I0308 21:58:49.285631 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8a9c4d25-8230-4111-b1ad-fd6427c16488-var-lock\") pod \"8a9c4d25-8230-4111-b1ad-fd6427c16488\" (UID: \"8a9c4d25-8230-4111-b1ad-fd6427c16488\") " Mar 08 21:58:49.285753 master-0 kubenswrapper[7480]: I0308 21:58:49.285728 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a9c4d25-8230-4111-b1ad-fd6427c16488-var-lock" (OuterVolumeSpecName: "var-lock") pod "8a9c4d25-8230-4111-b1ad-fd6427c16488" (UID: "8a9c4d25-8230-4111-b1ad-fd6427c16488"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 21:58:49.288834 master-0 kubenswrapper[7480]: I0308 21:58:49.285967 7480 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8a9c4d25-8230-4111-b1ad-fd6427c16488-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 08 21:58:49.288834 master-0 kubenswrapper[7480]: I0308 21:58:49.286007 7480 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8a9c4d25-8230-4111-b1ad-fd6427c16488-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 21:58:49.293229 master-0 kubenswrapper[7480]: I0308 21:58:49.291216 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a9c4d25-8230-4111-b1ad-fd6427c16488-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8a9c4d25-8230-4111-b1ad-fd6427c16488" (UID: "8a9c4d25-8230-4111-b1ad-fd6427c16488"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 21:58:49.386843 master-0 kubenswrapper[7480]: I0308 21:58:49.386775 7480 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8a9c4d25-8230-4111-b1ad-fd6427c16488-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 08 21:58:50.010313 master-0 kubenswrapper[7480]: I0308 21:58:50.010204 7480 generic.go:334] "Generic (PLEG): container finished" podID="18d5d11d-3d01-448f-b34e-55ebc772f5e8" containerID="90b4f897f8f9b9eba77267fc234acf3af0daac8bfb7169a47286a11ecb3c5e01" exitCode=0 Mar 08 21:58:50.010313 master-0 kubenswrapper[7480]: I0308 21:58:50.010269 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8k5md" event={"ID":"18d5d11d-3d01-448f-b34e-55ebc772f5e8","Type":"ContainerDied","Data":"90b4f897f8f9b9eba77267fc234acf3af0daac8bfb7169a47286a11ecb3c5e01"} Mar 08 21:58:50.017351 master-0 kubenswrapper[7480]: I0308 21:58:50.017280 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_8a9c4d25-8230-4111-b1ad-fd6427c16488/installer/0.log" Mar 08 21:58:50.017559 master-0 kubenswrapper[7480]: I0308 21:58:50.017412 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"8a9c4d25-8230-4111-b1ad-fd6427c16488","Type":"ContainerDied","Data":"a70da3d7e0f56ee98fe1de17a4ecc7f84ec0445b52ed29de54a5f11f2f33237d"} Mar 08 21:58:50.017559 master-0 kubenswrapper[7480]: I0308 21:58:50.017550 7480 scope.go:117] "RemoveContainer" containerID="ed03c19f3cd282d9dc8aba54e8beb63ed0e914d6163152f2611419e70c3ad5ad" Mar 08 21:58:50.017873 master-0 kubenswrapper[7480]: I0308 21:58:50.017820 7480 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 08 21:58:50.027717 master-0 kubenswrapper[7480]: I0308 21:58:50.027658 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"6b2d935675401022d0d3a5a0cba88a9960e98d4a712ee887558cdfed52b47cbb"} Mar 08 21:58:50.033326 master-0 kubenswrapper[7480]: I0308 21:58:50.033262 7480 generic.go:334] "Generic (PLEG): container finished" podID="5857b3d0-0865-4ffd-bcc9-3c139c575209" containerID="4b903d630ebf81f5dffddee24eda4248e8e68dec99aefe9ac560a5898194ece7" exitCode=0 Mar 08 21:58:50.048455 master-0 kubenswrapper[7480]: I0308 21:58:50.033373 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w7p5f" event={"ID":"5857b3d0-0865-4ffd-bcc9-3c139c575209","Type":"ContainerDied","Data":"4b903d630ebf81f5dffddee24eda4248e8e68dec99aefe9ac560a5898194ece7"} Mar 08 21:58:50.048455 master-0 kubenswrapper[7480]: I0308 21:58:50.038445 7480 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="528903c482dba26f9b2b79cd20a575e2b34c58d798336813cdf2368407e7ac08" exitCode=0 Mar 08 21:58:50.048455 master-0 kubenswrapper[7480]: I0308 21:58:50.038539 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"528903c482dba26f9b2b79cd20a575e2b34c58d798336813cdf2368407e7ac08"} Mar 08 21:58:50.048455 master-0 kubenswrapper[7480]: I0308 21:58:50.044225 7480 generic.go:334] "Generic (PLEG): container finished" podID="088eecd9-a153-4fe0-af5a-78f5bdc0eb6b" containerID="1e3bba86fc611770354755d87c02e967df54a626a16a1218a0b91a1d1f5b23e2" exitCode=0 Mar 08 21:58:50.048455 master-0 kubenswrapper[7480]: I0308 21:58:50.044305 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8w7wm" event={"ID":"088eecd9-a153-4fe0-af5a-78f5bdc0eb6b","Type":"ContainerDied","Data":"1e3bba86fc611770354755d87c02e967df54a626a16a1218a0b91a1d1f5b23e2"} Mar 08 21:58:50.050598 master-0 kubenswrapper[7480]: I0308 21:58:50.050535 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz" event={"ID":"2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8","Type":"ContainerStarted","Data":"566f64e1e5f69c2bf95c8075567ff0feb7dd0877a1f2fce23e6ae2446c0dbdb2"} Mar 08 21:58:50.054282 master-0 kubenswrapper[7480]: I0308 21:58:50.054216 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"b6b246bb81907eac732c126403c542413078697b3a057b896aee540f8c7e39d9"} Mar 08 21:58:50.061027 master-0 kubenswrapper[7480]: I0308 21:58:50.060961 7480 generic.go:334] "Generic (PLEG): container finished" podID="74d0aed3-8d57-472f-a48a-14ac41d6575f" containerID="2f1e10a332985e99999de49aecb019410671810760e55209e6a0bc38dd85a256" exitCode=0 Mar 08 21:58:50.061858 master-0 kubenswrapper[7480]: I0308 21:58:50.061812 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcrxj" event={"ID":"74d0aed3-8d57-472f-a48a-14ac41d6575f","Type":"ContainerDied","Data":"2f1e10a332985e99999de49aecb019410671810760e55209e6a0bc38dd85a256"} Mar 08 21:58:50.361975 master-0 kubenswrapper[7480]: I0308 
21:58:50.361902 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8k5md" Mar 08 21:58:50.401811 master-0 kubenswrapper[7480]: I0308 21:58:50.401755 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtb97\" (UniqueName: \"kubernetes.io/projected/18d5d11d-3d01-448f-b34e-55ebc772f5e8-kube-api-access-xtb97\") pod \"18d5d11d-3d01-448f-b34e-55ebc772f5e8\" (UID: \"18d5d11d-3d01-448f-b34e-55ebc772f5e8\") " Mar 08 21:58:50.402014 master-0 kubenswrapper[7480]: I0308 21:58:50.401847 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18d5d11d-3d01-448f-b34e-55ebc772f5e8-catalog-content\") pod \"18d5d11d-3d01-448f-b34e-55ebc772f5e8\" (UID: \"18d5d11d-3d01-448f-b34e-55ebc772f5e8\") " Mar 08 21:58:50.402014 master-0 kubenswrapper[7480]: I0308 21:58:50.401901 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18d5d11d-3d01-448f-b34e-55ebc772f5e8-utilities\") pod \"18d5d11d-3d01-448f-b34e-55ebc772f5e8\" (UID: \"18d5d11d-3d01-448f-b34e-55ebc772f5e8\") " Mar 08 21:58:50.403681 master-0 kubenswrapper[7480]: I0308 21:58:50.403629 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18d5d11d-3d01-448f-b34e-55ebc772f5e8-utilities" (OuterVolumeSpecName: "utilities") pod "18d5d11d-3d01-448f-b34e-55ebc772f5e8" (UID: "18d5d11d-3d01-448f-b34e-55ebc772f5e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 21:58:50.413153 master-0 kubenswrapper[7480]: I0308 21:58:50.413046 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18d5d11d-3d01-448f-b34e-55ebc772f5e8-kube-api-access-xtb97" (OuterVolumeSpecName: "kube-api-access-xtb97") pod "18d5d11d-3d01-448f-b34e-55ebc772f5e8" (UID: "18d5d11d-3d01-448f-b34e-55ebc772f5e8"). InnerVolumeSpecName "kube-api-access-xtb97". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 21:58:50.474097 master-0 kubenswrapper[7480]: I0308 21:58:50.474050 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 08 21:58:50.487042 master-0 kubenswrapper[7480]: I0308 21:58:50.486966 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18d5d11d-3d01-448f-b34e-55ebc772f5e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "18d5d11d-3d01-448f-b34e-55ebc772f5e8" (UID: "18d5d11d-3d01-448f-b34e-55ebc772f5e8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 21:58:50.503125 master-0 kubenswrapper[7480]: I0308 21:58:50.503016 7480 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtb97\" (UniqueName: \"kubernetes.io/projected/18d5d11d-3d01-448f-b34e-55ebc772f5e8-kube-api-access-xtb97\") on node \"master-0\" DevicePath \"\"" Mar 08 21:58:50.503125 master-0 kubenswrapper[7480]: I0308 21:58:50.503063 7480 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18d5d11d-3d01-448f-b34e-55ebc772f5e8-catalog-content\") on node \"master-0\" DevicePath \"\"" Mar 08 21:58:50.503125 master-0 kubenswrapper[7480]: I0308 21:58:50.503125 7480 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18d5d11d-3d01-448f-b34e-55ebc772f5e8-utilities\") on node \"master-0\" DevicePath \"\"" Mar 08 21:58:50.606913 master-0 kubenswrapper[7480]: I0308 21:58:50.603957 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/57a34dbc-eb6d-44f5-b1aa-4762b69382ed-kubelet-dir\") pod \"57a34dbc-eb6d-44f5-b1aa-4762b69382ed\" (UID: \"57a34dbc-eb6d-44f5-b1aa-4762b69382ed\") " Mar 08 21:58:50.606913 master-0 kubenswrapper[7480]: I0308 21:58:50.604110 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/57a34dbc-eb6d-44f5-b1aa-4762b69382ed-var-lock\") pod \"57a34dbc-eb6d-44f5-b1aa-4762b69382ed\" (UID: \"57a34dbc-eb6d-44f5-b1aa-4762b69382ed\") " Mar 08 21:58:50.606913 master-0 kubenswrapper[7480]: I0308 21:58:50.606127 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/57a34dbc-eb6d-44f5-b1aa-4762b69382ed-kube-api-access\") pod \"57a34dbc-eb6d-44f5-b1aa-4762b69382ed\" (UID: \"57a34dbc-eb6d-44f5-b1aa-4762b69382ed\") " Mar 08 21:58:50.606913 master-0 kubenswrapper[7480]: I0308 21:58:50.606376 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57a34dbc-eb6d-44f5-b1aa-4762b69382ed-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "57a34dbc-eb6d-44f5-b1aa-4762b69382ed" (UID: "57a34dbc-eb6d-44f5-b1aa-4762b69382ed"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 21:58:50.608219 master-0 kubenswrapper[7480]: I0308 21:58:50.607028 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57a34dbc-eb6d-44f5-b1aa-4762b69382ed-var-lock" (OuterVolumeSpecName: "var-lock") pod "57a34dbc-eb6d-44f5-b1aa-4762b69382ed" (UID: "57a34dbc-eb6d-44f5-b1aa-4762b69382ed"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 21:58:50.612196 master-0 kubenswrapper[7480]: I0308 21:58:50.612132 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a34dbc-eb6d-44f5-b1aa-4762b69382ed-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "57a34dbc-eb6d-44f5-b1aa-4762b69382ed" (UID: "57a34dbc-eb6d-44f5-b1aa-4762b69382ed"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 21:58:50.708624 master-0 kubenswrapper[7480]: I0308 21:58:50.708486 7480 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/57a34dbc-eb6d-44f5-b1aa-4762b69382ed-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 21:58:50.708982 master-0 kubenswrapper[7480]: I0308 21:58:50.708729 7480 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/57a34dbc-eb6d-44f5-b1aa-4762b69382ed-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 08 21:58:50.708982 master-0 kubenswrapper[7480]: I0308 21:58:50.708855 7480 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/57a34dbc-eb6d-44f5-b1aa-4762b69382ed-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 08 21:58:51.076260 master-0 kubenswrapper[7480]: I0308 21:58:51.076187 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8k5md" event={"ID":"18d5d11d-3d01-448f-b34e-55ebc772f5e8","Type":"ContainerDied","Data":"8ab8f2e9850b184f21d02d18d922bb80d4a105657156f3e3896899fd2c2b2c8d"} Mar 08 21:58:51.076260 master-0 kubenswrapper[7480]: I0308 21:58:51.076261 7480 scope.go:117] "RemoveContainer" containerID="90b4f897f8f9b9eba77267fc234acf3af0daac8bfb7169a47286a11ecb3c5e01" Mar 08 21:58:51.077010 master-0 kubenswrapper[7480]: I0308 21:58:51.076369 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8k5md" Mar 08 21:58:51.079119 master-0 kubenswrapper[7480]: I0308 21:58:51.078967 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"57a34dbc-eb6d-44f5-b1aa-4762b69382ed","Type":"ContainerDied","Data":"acaa687ebf5d39190e2c2ec89078fb51a5c01299107f28308e1d34d40984afd2"} Mar 08 21:58:51.079208 master-0 kubenswrapper[7480]: I0308 21:58:51.079062 7480 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acaa687ebf5d39190e2c2ec89078fb51a5c01299107f28308e1d34d40984afd2" Mar 08 21:58:51.079208 master-0 kubenswrapper[7480]: I0308 21:58:51.078996 7480 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 08 21:58:51.134304 master-0 kubenswrapper[7480]: I0308 21:58:51.134251 7480 scope.go:117] "RemoveContainer" containerID="3b5430452bb2f26a5f4205484f896625833ba1cf6fded222ed84481fe9140384" Mar 08 21:58:51.609645 master-0 kubenswrapper[7480]: E0308 21:58:51.608611 7480 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io master-0)" Mar 08 21:58:52.087342 master-0 kubenswrapper[7480]: I0308 21:58:52.087278 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w7p5f" event={"ID":"5857b3d0-0865-4ffd-bcc9-3c139c575209","Type":"ContainerStarted","Data":"a11dab37eb99c92f50fc6c3697f2813ade902caf47e720efac87404523695ca4"} Mar 08 21:58:52.092962 master-0 kubenswrapper[7480]: I0308 21:58:52.092909 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcrxj" event={"ID":"74d0aed3-8d57-472f-a48a-14ac41d6575f","Type":"ContainerStarted","Data":"f2c2d966fd4f6b730c5c219486608abd030c709bf059312e195d57c0e217c208"} Mar 08 21:58:52.097477 master-0 kubenswrapper[7480]: I0308 21:58:52.097431 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8w7wm" event={"ID":"088eecd9-a153-4fe0-af5a-78f5bdc0eb6b","Type":"ContainerStarted","Data":"70db3f8570e6da2164b211258ae4e0d90fa0917b0d814ee5c4b2fc4c910cafda"} Mar 08 21:58:52.156319 master-0 kubenswrapper[7480]: E0308 21:58:52.156233 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 21:58:52.842906 master-0 kubenswrapper[7480]: I0308 21:58:52.842362 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 21:58:53.049550 master-0 kubenswrapper[7480]: I0308 21:58:53.049480 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 21:58:53.808280 master-0 kubenswrapper[7480]: I0308 21:58:53.808199 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w7p5f" Mar 08 21:58:53.809226 master-0 kubenswrapper[7480]: I0308 21:58:53.808446 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w7p5f" Mar 08 21:58:53.878904 master-0 kubenswrapper[7480]: I0308 21:58:53.878821 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w7p5f" Mar 08 21:58:55.128954 master-0 kubenswrapper[7480]: I0308 21:58:55.128851 7480 generic.go:334] "Generic (PLEG): container finished" podID="354f29997baa583b6238f7de9108ee10" containerID="f40be1d4a754000339d3870a29f35b23044b2b81588631c57cf192ab4e70d6fd" exitCode=0 Mar 08 21:58:55.191198 master-0 kubenswrapper[7480]: I0308 21:58:55.191024 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jcrxj" Mar 08 21:58:55.191198 master-0 kubenswrapper[7480]: I0308 21:58:55.191194 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-jcrxj" Mar 08 21:58:55.243646 master-0 kubenswrapper[7480]: I0308 21:58:55.243570 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jcrxj" Mar 08 21:58:56.189370 master-0 kubenswrapper[7480]: I0308 21:58:56.189259 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jcrxj" Mar 08 21:58:56.580212 master-0 kubenswrapper[7480]: I0308 21:58:56.580086 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8w7wm" Mar 08 21:58:56.580212 master-0 kubenswrapper[7480]: I0308 21:58:56.580189 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8w7wm" Mar 08 21:58:57.636348 master-0 kubenswrapper[7480]: I0308 21:58:57.636248 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8w7wm" podUID="088eecd9-a153-4fe0-af5a-78f5bdc0eb6b" containerName="registry-server" probeResult="failure" output=< Mar 08 21:58:57.636348 master-0 kubenswrapper[7480]: timeout: failed to connect service ":50051" within 1s Mar 08 21:58:57.636348 master-0 kubenswrapper[7480]: > Mar 08 21:58:58.156222 master-0 kubenswrapper[7480]: I0308 21:58:58.156155 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_354f29997baa583b6238f7de9108ee10/etcdctl/0.log" Mar 08 21:58:58.156403 master-0 kubenswrapper[7480]: I0308 21:58:58.156228 7480 generic.go:334] "Generic (PLEG): container finished" podID="354f29997baa583b6238f7de9108ee10" containerID="ca95d22d6228d434ce4ed2f415b15a00e7effc076e30de148f0569774a6d01db" exitCode=137 Mar 08 21:58:58.204432 master-0 kubenswrapper[7480]: I0308 21:58:58.204370 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_354f29997baa583b6238f7de9108ee10/etcdctl/0.log" Mar 08 21:58:58.204714 master-0 kubenswrapper[7480]: I0308 21:58:58.204494 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 08 21:58:58.225825 master-0 kubenswrapper[7480]: I0308 21:58:58.225752 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"354f29997baa583b6238f7de9108ee10\" (UID: \"354f29997baa583b6238f7de9108ee10\") " Mar 08 21:58:58.225825 master-0 kubenswrapper[7480]: I0308 21:58:58.225812 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"354f29997baa583b6238f7de9108ee10\" (UID: \"354f29997baa583b6238f7de9108ee10\") " Mar 08 21:58:58.226263 master-0 kubenswrapper[7480]: I0308 21:58:58.225942 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir" (OuterVolumeSpecName: "data-dir") pod "354f29997baa583b6238f7de9108ee10" (UID: "354f29997baa583b6238f7de9108ee10"). InnerVolumeSpecName "data-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 21:58:58.226263 master-0 kubenswrapper[7480]: I0308 21:58:58.226043 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs" (OuterVolumeSpecName: "certs") pod "354f29997baa583b6238f7de9108ee10" (UID: "354f29997baa583b6238f7de9108ee10"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 21:58:58.226444 master-0 kubenswrapper[7480]: I0308 21:58:58.226304 7480 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") on node \"master-0\" DevicePath \"\"" Mar 08 21:58:58.226444 master-0 kubenswrapper[7480]: I0308 21:58:58.226325 7480 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 21:58:59.031555 master-0 kubenswrapper[7480]: I0308 21:58:59.031434 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 21:58:59.175538 master-0 kubenswrapper[7480]: I0308 21:58:59.175413 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_354f29997baa583b6238f7de9108ee10/etcdctl/0.log" Mar 08 21:58:59.175538 master-0 kubenswrapper[7480]: I0308 21:58:59.175537 7480 scope.go:117] "RemoveContainer" containerID="f40be1d4a754000339d3870a29f35b23044b2b81588631c57cf192ab4e70d6fd" Mar 08 21:58:59.175997 master-0 kubenswrapper[7480]: I0308 21:58:59.175837 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 08 21:58:59.193268 master-0 kubenswrapper[7480]: I0308 21:58:59.193231 7480 scope.go:117] "RemoveContainer" containerID="ca95d22d6228d434ce4ed2f415b15a00e7effc076e30de148f0569774a6d01db" Mar 08 21:58:59.794291 master-0 kubenswrapper[7480]: I0308 21:58:59.794212 7480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="354f29997baa583b6238f7de9108ee10" path="/var/lib/kubelet/pods/354f29997baa583b6238f7de9108ee10/volumes" Mar 08 21:58:59.794843 master-0 kubenswrapper[7480]: I0308 21:58:59.794801 7480 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 08 21:59:01.609336 master-0 kubenswrapper[7480]: E0308 21:59:01.609171 7480 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 21:59:01.988198 master-0 kubenswrapper[7480]: E0308 21:59:01.987886 7480 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0-master-0.189afc9226c21c1c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:58:27.969391644 +0000 UTC m=+58.423012246,LastTimestamp:2026-03-08 
21:58:27.969391644 +0000 UTC m=+58.423012246,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:59:01.989470 master-0 kubenswrapper[7480]: E0308 21:59:01.989411 7480 projected.go:194] Error preparing data for projected volume kube-api-access-sdfls for pod openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 08 21:59:01.989581 master-0 kubenswrapper[7480]: E0308 21:59:01.989540 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad-kube-api-access-sdfls podName:c228b17c-fd7b-4273-ac03-eac5d4a3a4ad nodeName:}" failed. No retries permitted until 2026-03-08 21:59:02.489507608 +0000 UTC m=+92.943128250 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-sdfls" (UniqueName: "kubernetes.io/projected/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad-kube-api-access-sdfls") pod "cluster-storage-operator-6fbfc8dc8f-p68k6" (UID: "c228b17c-fd7b-4273-ac03-eac5d4a3a4ad") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 08 21:59:02.031698 master-0 kubenswrapper[7480]: I0308 21:59:02.031595 7480 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 21:59:02.092568 master-0 kubenswrapper[7480]: E0308 21:59:02.092446 7480 projected.go:194] Error preparing data for projected volume kube-api-access-zj5rx for pod openshift-marketplace/community-operators-47cmq: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 08 21:59:02.092846 master-0 kubenswrapper[7480]: E0308 21:59:02.092683 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89619d97-2c16-4e76-ba80-8b519f6a9366-kube-api-access-zj5rx podName:89619d97-2c16-4e76-ba80-8b519f6a9366 nodeName:}" failed. No retries permitted until 2026-03-08 21:59:02.592615326 +0000 UTC m=+93.046235968 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zj5rx" (UniqueName: "kubernetes.io/projected/89619d97-2c16-4e76-ba80-8b519f6a9366-kube-api-access-zj5rx") pod "community-operators-47cmq" (UID: "89619d97-2c16-4e76-ba80-8b519f6a9366") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 08 21:59:02.157338 master-0 kubenswrapper[7480]: E0308 21:59:02.157056 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 21:59:02.229746 master-0 kubenswrapper[7480]: I0308 21:59:02.229659 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-x8jg8_d4d01185-e485-4697-92c2-31a044f25d82/openshift-controller-manager-operator/1.log" Mar 08 21:59:02.229746 master-0 kubenswrapper[7480]: I0308 21:59:02.229747 7480 generic.go:334] "Generic (PLEG): container finished" podID="d4d01185-e485-4697-92c2-31a044f25d82" containerID="2f8d7fcda4e6f52fa1e1bae05fb59e3135aaa4a13581f1a085c1284cb2c0e356" exitCode=1 Mar 08 21:59:02.588934 master-0 kubenswrapper[7480]: I0308 21:59:02.588777 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdfls\" (UniqueName: \"kubernetes.io/projected/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad-kube-api-access-sdfls\") pod \"cluster-storage-operator-6fbfc8dc8f-p68k6\" (UID: \"c228b17c-fd7b-4273-ac03-eac5d4a3a4ad\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6" Mar 08 21:59:02.693395 master-0 kubenswrapper[7480]: I0308 21:59:02.693297 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj5rx\" (UniqueName: \"kubernetes.io/projected/89619d97-2c16-4e76-ba80-8b519f6a9366-kube-api-access-zj5rx\") pod \"community-operators-47cmq\" (UID: \"89619d97-2c16-4e76-ba80-8b519f6a9366\") " pod="openshift-marketplace/community-operators-47cmq" Mar 08 21:59:03.048183 master-0 kubenswrapper[7480]: E0308 21:59:03.047927 7480 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 08 21:59:04.246827 master-0 kubenswrapper[7480]: I0308 21:59:04.246732 7480 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="9bb960964f2aa5ec9092fb22294453593afb84fc828f2c73db9eceae206a6489" exitCode=0 Mar 08 21:59:07.268437 master-0 kubenswrapper[7480]: I0308 21:59:07.268359 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0/installer/0.log" Mar 08 21:59:07.269290 master-0 kubenswrapper[7480]: I0308 21:59:07.268469 7480 generic.go:334] "Generic (PLEG): container finished" podID="147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0" containerID="23ca4cac0c50a9d156ec6ed1b11f780e700b2306444f16b3646285a8a0f6b21b" exitCode=1 Mar 08 21:59:11.610585 master-0 kubenswrapper[7480]: E0308 21:59:11.610243 7480 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 
21:59:12.032088 master-0 kubenswrapper[7480]: I0308 21:59:12.031866 7480 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 21:59:12.158565 master-0 kubenswrapper[7480]: E0308 21:59:12.158511 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 21:59:17.254650 master-0 kubenswrapper[7480]: E0308 21:59:17.254549 7480 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 08 21:59:17.344605 master-0 kubenswrapper[7480]: I0308 21:59:17.344519 7480 generic.go:334] "Generic (PLEG): container finished" podID="a8e00c74-fb72-4e3d-a22c-c38a4772a813" containerID="334ebc87bbf952673cd1b3477f45396aaf813413e807f2bdfa8f48d87bc817d9" exitCode=0 Mar 08 21:59:18.361824 master-0 kubenswrapper[7480]: I0308 21:59:18.361721 7480 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="2fd8e5815a9775ccbbdb270e0d056e236703cf28a68a240b97100d92c494a2aa" exitCode=0 Mar 08 21:59:19.369431 master-0 kubenswrapper[7480]: I0308 21:59:19.369341 7480 generic.go:334] "Generic (PLEG): container finished" podID="04fb7bdb-fb5a-4187-94a3-67c8f09684ed" containerID="00c5ed3578644c2cfcf3b05743187fa1a4e66cf46b816a9e956e779028d0b36b" exitCode=0 Mar 08 21:59:21.386762 master-0 kubenswrapper[7480]: I0308 21:59:21.386667 7480 generic.go:334] "Generic (PLEG): container finished" podID="971ffa86-4d52-4dc3-ba28-03d116ec3494" containerID="6df6f113522fa49700aeaebc115d4f7bc3c6c606f1453723e6b3427085f53838" exitCode=0 Mar 08 21:59:21.612017 master-0 kubenswrapper[7480]: E0308 21:59:21.611902 7480 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 21:59:21.612731 master-0 kubenswrapper[7480]: I0308 21:59:21.612643 7480 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 08 21:59:22.031403 master-0 kubenswrapper[7480]: I0308 21:59:22.031302 7480 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 21:59:22.159710 master-0 kubenswrapper[7480]: E0308 21:59:22.159589 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 21:59:22.159710 master-0 kubenswrapper[7480]: E0308 21:59:22.159659 7480 
kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 08 21:59:23.401943 master-0 kubenswrapper[7480]: I0308 21:59:23.401863 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-trhtl_dfe625a1-5ba4-491f-9ab3-5d91154961a0/approver/0.log" Mar 08 21:59:23.402937 master-0 kubenswrapper[7480]: I0308 21:59:23.402900 7480 generic.go:334] "Generic (PLEG): container finished" podID="dfe625a1-5ba4-491f-9ab3-5d91154961a0" containerID="73a8f9d32fb6d4973561166a1225ead4683b3110d97d82f0bed60b3b5a68361b" exitCode=1 Mar 08 21:59:24.412053 master-0 kubenswrapper[7480]: I0308 21:59:24.411971 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-znt8q_a21e2296-10cb-4c70-ac3e-2173d35faac4/network-operator/0.log" Mar 08 21:59:24.412053 master-0 kubenswrapper[7480]: I0308 21:59:24.412036 7480 generic.go:334] "Generic (PLEG): container finished" podID="a21e2296-10cb-4c70-ac3e-2173d35faac4" containerID="33e74f7c7bc9716ac9cd2cfb19a68cc948644c1413dc78e99dffc063fbe5f927" exitCode=255 Mar 08 21:59:26.426951 master-0 kubenswrapper[7480]: I0308 21:59:26.426726 7480 generic.go:334] "Generic (PLEG): container finished" podID="0d851f97-b21e-432e-a4c3-dc0a8ff00e84" containerID="2372290458f059a617f7c34963da0c908f74ff47559433f117b121db9f6a2646" exitCode=0 Mar 08 21:59:26.431889 master-0 kubenswrapper[7480]: I0308 21:59:26.431721 7480 generic.go:334] "Generic (PLEG): container finished" podID="b849f992-1020-4633-98be-75705b962fa9" containerID="c086cbd7303ffe955bb2645d06594a1046769c847ec0d61ce7c507a7b2e3ee42" exitCode=0 Mar 08 21:59:26.434144 master-0 kubenswrapper[7480]: I0308 21:59:26.434004 7480 generic.go:334] "Generic (PLEG): container finished" podID="4382d186-34e4-40af-9b92-bb17ddcaa23f" containerID="939aa1886a91ab1eb51e8a1cf13c57622098c7bede001e5d513bea76546b85fa" exitCode=0 Mar 08 21:59:28.080535 master-0 kubenswrapper[7480]: I0308 21:59:28.080454 7480 status_manager.go:851] "Failed to get status for pod" podUID="65148321-8caf-4e9c-80cc-ced8e2a8ac03" pod="openshift-kube-scheduler/installer-1-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-1-master-0)" Mar 08 21:59:28.362353 master-0 kubenswrapper[7480]: E0308 21:59:28.362288 7480 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 08 21:59:28.362353 master-0 kubenswrapper[7480]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-autoscaler-operator-69576476f7-dvgxg_openshift-machine-api_d9fe466f-5a23-4f69-8a96-44bd5d6194f5_0(bf7b1132618d1c48679a37a92c47da904a94bad73427003b42e475870a685452): error adding pod openshift-machine-api_cluster-autoscaler-operator-69576476f7-dvgxg to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"bf7b1132618d1c48679a37a92c47da904a94bad73427003b42e475870a685452" Netns:"/var/run/netns/ab66b324-18d3-49be-a7bb-91f00cf3c2fa" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-autoscaler-operator-69576476f7-dvgxg;K8S_POD_INFRA_CONTAINER_ID=bf7b1132618d1c48679a37a92c47da904a94bad73427003b42e475870a685452;K8S_POD_UID=d9fe466f-5a23-4f69-8a96-44bd5d6194f5" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg] networking: 
Multus: [openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg/d9fe466f-5a23-4f69-8a96-44bd5d6194f5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-autoscaler-operator-69576476f7-dvgxg in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-autoscaler-operator-69576476f7-dvgxg in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-69576476f7-dvgxg?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 08 21:59:28.362353 master-0 kubenswrapper[7480]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 08 21:59:28.362353 master-0 kubenswrapper[7480]: > Mar 08 21:59:28.362556 master-0 kubenswrapper[7480]: E0308 21:59:28.362394 7480 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 08 21:59:28.362556 master-0 kubenswrapper[7480]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-autoscaler-operator-69576476f7-dvgxg_openshift-machine-api_d9fe466f-5a23-4f69-8a96-44bd5d6194f5_0(bf7b1132618d1c48679a37a92c47da904a94bad73427003b42e475870a685452): error adding pod openshift-machine-api_cluster-autoscaler-operator-69576476f7-dvgxg to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"bf7b1132618d1c48679a37a92c47da904a94bad73427003b42e475870a685452" Netns:"/var/run/netns/ab66b324-18d3-49be-a7bb-91f00cf3c2fa" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-autoscaler-operator-69576476f7-dvgxg;K8S_POD_INFRA_CONTAINER_ID=bf7b1132618d1c48679a37a92c47da904a94bad73427003b42e475870a685452;K8S_POD_UID=d9fe466f-5a23-4f69-8a96-44bd5d6194f5" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg] networking: Multus: [openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg/d9fe466f-5a23-4f69-8a96-44bd5d6194f5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-autoscaler-operator-69576476f7-dvgxg in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-autoscaler-operator-69576476f7-dvgxg in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-69576476f7-dvgxg?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 08 21:59:28.362556 master-0 kubenswrapper[7480]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 08 21:59:28.362556 master-0 kubenswrapper[7480]: > 
pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" Mar 08 21:59:28.362556 master-0 kubenswrapper[7480]: E0308 21:59:28.362423 7480 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 08 21:59:28.362556 master-0 kubenswrapper[7480]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-autoscaler-operator-69576476f7-dvgxg_openshift-machine-api_d9fe466f-5a23-4f69-8a96-44bd5d6194f5_0(bf7b1132618d1c48679a37a92c47da904a94bad73427003b42e475870a685452): error adding pod openshift-machine-api_cluster-autoscaler-operator-69576476f7-dvgxg to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"bf7b1132618d1c48679a37a92c47da904a94bad73427003b42e475870a685452" Netns:"/var/run/netns/ab66b324-18d3-49be-a7bb-91f00cf3c2fa" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-autoscaler-operator-69576476f7-dvgxg;K8S_POD_INFRA_CONTAINER_ID=bf7b1132618d1c48679a37a92c47da904a94bad73427003b42e475870a685452;K8S_POD_UID=d9fe466f-5a23-4f69-8a96-44bd5d6194f5" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg] networking: Multus: [openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg/d9fe466f-5a23-4f69-8a96-44bd5d6194f5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-autoscaler-operator-69576476f7-dvgxg in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-autoscaler-operator-69576476f7-dvgxg in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-69576476f7-dvgxg?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 08 21:59:28.362556 master-0 kubenswrapper[7480]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 08 21:59:28.362556 master-0 kubenswrapper[7480]: > pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" Mar 08 21:59:28.362556 master-0 kubenswrapper[7480]: E0308 21:59:28.362496 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-autoscaler-operator-69576476f7-dvgxg_openshift-machine-api(d9fe466f-5a23-4f69-8a96-44bd5d6194f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cluster-autoscaler-operator-69576476f7-dvgxg_openshift-machine-api(d9fe466f-5a23-4f69-8a96-44bd5d6194f5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-autoscaler-operator-69576476f7-dvgxg_openshift-machine-api_d9fe466f-5a23-4f69-8a96-44bd5d6194f5_0(bf7b1132618d1c48679a37a92c47da904a94bad73427003b42e475870a685452): error adding pod openshift-machine-api_cluster-autoscaler-operator-69576476f7-dvgxg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"bf7b1132618d1c48679a37a92c47da904a94bad73427003b42e475870a685452\\\" 
Netns:\\\"/var/run/netns/ab66b324-18d3-49be-a7bb-91f00cf3c2fa\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-autoscaler-operator-69576476f7-dvgxg;K8S_POD_INFRA_CONTAINER_ID=bf7b1132618d1c48679a37a92c47da904a94bad73427003b42e475870a685452;K8S_POD_UID=d9fe466f-5a23-4f69-8a96-44bd5d6194f5\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg] networking: Multus: [openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg/d9fe466f-5a23-4f69-8a96-44bd5d6194f5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-autoscaler-operator-69576476f7-dvgxg in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-autoscaler-operator-69576476f7-dvgxg in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-69576476f7-dvgxg?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" podUID="d9fe466f-5a23-4f69-8a96-44bd5d6194f5" Mar 08 21:59:28.446788 master-0 kubenswrapper[7480]: I0308 21:59:28.446721 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" Mar 08 21:59:28.447714 master-0 kubenswrapper[7480]: I0308 21:59:28.447672 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" Mar 08 21:59:28.716208 master-0 kubenswrapper[7480]: E0308 21:59:28.716048 7480 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 08 21:59:28.716208 master-0 kubenswrapper[7480]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_insights-operator-8f89dfddd-fn4ck_openshift-insights_66e50eed-e3ac-431f-931b-7c4c848c491b_0(d5eee1757c747f4ee6de9711fff753b387d8fddf7ccd0e75be5bb7e73848d31e): error adding pod openshift-insights_insights-operator-8f89dfddd-fn4ck to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d5eee1757c747f4ee6de9711fff753b387d8fddf7ccd0e75be5bb7e73848d31e" Netns:"/var/run/netns/d8fc7072-093b-4be0-90de-1920cf492fd5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-8f89dfddd-fn4ck;K8S_POD_INFRA_CONTAINER_ID=d5eee1757c747f4ee6de9711fff753b387d8fddf7ccd0e75be5bb7e73848d31e;K8S_POD_UID=66e50eed-e3ac-431f-931b-7c4c848c491b" Path:"" ERRORED: error configuring pod [openshift-insights/insights-operator-8f89dfddd-fn4ck] networking: Multus: [openshift-insights/insights-operator-8f89dfddd-fn4ck/66e50eed-e3ac-431f-931b-7c4c848c491b]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-8f89dfddd-fn4ck in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-8f89dfddd-fn4ck in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-8f89dfddd-fn4ck?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 08 21:59:28.716208 master-0 kubenswrapper[7480]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 08 21:59:28.716208 master-0 kubenswrapper[7480]: > Mar 08 21:59:28.716208 master-0 kubenswrapper[7480]: E0308 21:59:28.716170 7480 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 08 21:59:28.716208 master-0 kubenswrapper[7480]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_insights-operator-8f89dfddd-fn4ck_openshift-insights_66e50eed-e3ac-431f-931b-7c4c848c491b_0(d5eee1757c747f4ee6de9711fff753b387d8fddf7ccd0e75be5bb7e73848d31e): error adding pod openshift-insights_insights-operator-8f89dfddd-fn4ck to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d5eee1757c747f4ee6de9711fff753b387d8fddf7ccd0e75be5bb7e73848d31e" Netns:"/var/run/netns/d8fc7072-093b-4be0-90de-1920cf492fd5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-8f89dfddd-fn4ck;K8S_POD_INFRA_CONTAINER_ID=d5eee1757c747f4ee6de9711fff753b387d8fddf7ccd0e75be5bb7e73848d31e;K8S_POD_UID=66e50eed-e3ac-431f-931b-7c4c848c491b" Path:"" ERRORED: error configuring pod [openshift-insights/insights-operator-8f89dfddd-fn4ck] networking: Multus: 
[openshift-insights/insights-operator-8f89dfddd-fn4ck/66e50eed-e3ac-431f-931b-7c4c848c491b]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-8f89dfddd-fn4ck in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-8f89dfddd-fn4ck in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-8f89dfddd-fn4ck?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 08 21:59:28.716208 master-0 kubenswrapper[7480]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 08 21:59:28.716208 master-0 kubenswrapper[7480]: > pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" Mar 08 21:59:28.716208 master-0 kubenswrapper[7480]: E0308 21:59:28.716196 7480 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 08 21:59:28.716208 master-0 kubenswrapper[7480]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_insights-operator-8f89dfddd-fn4ck_openshift-insights_66e50eed-e3ac-431f-931b-7c4c848c491b_0(d5eee1757c747f4ee6de9711fff753b387d8fddf7ccd0e75be5bb7e73848d31e): error adding pod openshift-insights_insights-operator-8f89dfddd-fn4ck to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d5eee1757c747f4ee6de9711fff753b387d8fddf7ccd0e75be5bb7e73848d31e" Netns:"/var/run/netns/d8fc7072-093b-4be0-90de-1920cf492fd5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-8f89dfddd-fn4ck;K8S_POD_INFRA_CONTAINER_ID=d5eee1757c747f4ee6de9711fff753b387d8fddf7ccd0e75be5bb7e73848d31e;K8S_POD_UID=66e50eed-e3ac-431f-931b-7c4c848c491b" Path:"" ERRORED: error configuring pod [openshift-insights/insights-operator-8f89dfddd-fn4ck] networking: Multus: [openshift-insights/insights-operator-8f89dfddd-fn4ck/66e50eed-e3ac-431f-931b-7c4c848c491b]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-8f89dfddd-fn4ck in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-8f89dfddd-fn4ck in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-8f89dfddd-fn4ck?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 08 21:59:28.716208 master-0 kubenswrapper[7480]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 08 21:59:28.716208 master-0 kubenswrapper[7480]: > pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" Mar 08 21:59:28.716725 master-0 kubenswrapper[7480]: E0308 21:59:28.716278 7480 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"insights-operator-8f89dfddd-fn4ck_openshift-insights(66e50eed-e3ac-431f-931b-7c4c848c491b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"insights-operator-8f89dfddd-fn4ck_openshift-insights(66e50eed-e3ac-431f-931b-7c4c848c491b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_insights-operator-8f89dfddd-fn4ck_openshift-insights_66e50eed-e3ac-431f-931b-7c4c848c491b_0(d5eee1757c747f4ee6de9711fff753b387d8fddf7ccd0e75be5bb7e73848d31e): error adding pod openshift-insights_insights-operator-8f89dfddd-fn4ck to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"d5eee1757c747f4ee6de9711fff753b387d8fddf7ccd0e75be5bb7e73848d31e\\\" Netns:\\\"/var/run/netns/d8fc7072-093b-4be0-90de-1920cf492fd5\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-8f89dfddd-fn4ck;K8S_POD_INFRA_CONTAINER_ID=d5eee1757c747f4ee6de9711fff753b387d8fddf7ccd0e75be5bb7e73848d31e;K8S_POD_UID=66e50eed-e3ac-431f-931b-7c4c848c491b\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-insights/insights-operator-8f89dfddd-fn4ck] networking: Multus: [openshift-insights/insights-operator-8f89dfddd-fn4ck/66e50eed-e3ac-431f-931b-7c4c848c491b]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-8f89dfddd-fn4ck in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-8f89dfddd-fn4ck in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-8f89dfddd-fn4ck?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" podUID="66e50eed-e3ac-431f-931b-7c4c848c491b" Mar 08 21:59:29.452650 master-0 kubenswrapper[7480]: I0308 21:59:29.452569 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" Mar 08 21:59:29.453547 master-0 kubenswrapper[7480]: I0308 21:59:29.453341 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" Mar 08 21:59:31.469910 master-0 kubenswrapper[7480]: I0308 21:59:31.469802 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-xwmmm_d9e9c931-9595-42f1-bbc2-c412286f6cd1/cluster-baremetal-operator/0.log" Mar 08 21:59:31.469910 master-0 kubenswrapper[7480]: I0308 21:59:31.469889 7480 generic.go:334] "Generic (PLEG): container finished" podID="d9e9c931-9595-42f1-bbc2-c412286f6cd1" containerID="6edcb8198a1dd9b552f9d5577953c53700190a2b87b4307329abfdbc057033f6" exitCode=1 Mar 08 21:59:31.471963 master-0 kubenswrapper[7480]: I0308 21:59:31.471885 7480 generic.go:334] "Generic (PLEG): container finished" podID="f6fbc12f-3c27-4a7a-933f-43a55c960335" containerID="fa11530abd773575590a911f848030e060ab34b160f17f0ed7e7dadcd26f2550" exitCode=0 Mar 08 21:59:31.613445 master-0 kubenswrapper[7480]: E0308 21:59:31.613308 7480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Mar 08 21:59:33.799389 master-0 kubenswrapper[7480]: E0308 21:59:33.799282 7480 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 08 21:59:33.800384 master-0 kubenswrapper[7480]: E0308 21:59:33.799638 7480 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.019s" Mar 08 21:59:33.800384 master-0 kubenswrapper[7480]: I0308 21:59:33.799840 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8w7wm" Mar 08 21:59:33.800384 master-0 kubenswrapper[7480]: I0308 21:59:33.800031 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w7p5f" Mar 08 21:59:33.811620 master-0 kubenswrapper[7480]: I0308 21:59:33.811554 7480 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 08 21:59:35.991277 master-0 kubenswrapper[7480]: E0308 21:59:35.990899 7480 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{installer-2-master-0.189afc9227c71a09 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-2-master-0,UID:78dc543f-66ed-4098-b5a9-699ec2ccc856,APIVersion:v1,ResourceVersion:9036,FieldPath:spec.containers{installer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:58:27.986496009 +0000 UTC m=+58.440116611,LastTimestamp:2026-03-08 21:58:27.986496009 +0000 UTC m=+58.440116611,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 21:59:36.592885 master-0 kubenswrapper[7480]: E0308 21:59:36.592792 7480 
projected.go:194] Error preparing data for projected volume kube-api-access-sdfls for pod openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 08 21:59:36.593250 master-0 kubenswrapper[7480]: E0308 21:59:36.592964 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad-kube-api-access-sdfls podName:c228b17c-fd7b-4273-ac03-eac5d4a3a4ad nodeName:}" failed. No retries permitted until 2026-03-08 21:59:37.592921449 +0000 UTC m=+128.046542091 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-sdfls" (UniqueName: "kubernetes.io/projected/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad-kube-api-access-sdfls") pod "cluster-storage-operator-6fbfc8dc8f-p68k6" (UID: "c228b17c-fd7b-4273-ac03-eac5d4a3a4ad") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 08 21:59:36.697172 master-0 kubenswrapper[7480]: E0308 21:59:36.697034 7480 projected.go:194] Error preparing data for projected volume kube-api-access-zj5rx for pod openshift-marketplace/community-operators-47cmq: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 08 21:59:36.697510 master-0 kubenswrapper[7480]: E0308 21:59:36.697199 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89619d97-2c16-4e76-ba80-8b519f6a9366-kube-api-access-zj5rx podName:89619d97-2c16-4e76-ba80-8b519f6a9366 nodeName:}" failed. No retries permitted until 2026-03-08 21:59:37.697170626 +0000 UTC m=+128.150791228 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-zj5rx" (UniqueName: "kubernetes.io/projected/89619d97-2c16-4e76-ba80-8b519f6a9366-kube-api-access-zj5rx") pod "community-operators-47cmq" (UID: "89619d97-2c16-4e76-ba80-8b519f6a9366") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 08 21:59:37.632585 master-0 kubenswrapper[7480]: I0308 21:59:37.632455 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdfls\" (UniqueName: \"kubernetes.io/projected/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad-kube-api-access-sdfls\") pod \"cluster-storage-operator-6fbfc8dc8f-p68k6\" (UID: \"c228b17c-fd7b-4273-ac03-eac5d4a3a4ad\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6" Mar 08 21:59:37.734496 master-0 kubenswrapper[7480]: I0308 21:59:37.734403 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj5rx\" (UniqueName: \"kubernetes.io/projected/89619d97-2c16-4e76-ba80-8b519f6a9366-kube-api-access-zj5rx\") pod \"community-operators-47cmq\" (UID: \"89619d97-2c16-4e76-ba80-8b519f6a9366\") " pod="openshift-marketplace/community-operators-47cmq" Mar 08 21:59:37.768740 master-0 kubenswrapper[7480]: I0308 21:59:37.768607 7480 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-8h8fx container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 08 21:59:37.768922 master-0 kubenswrapper[7480]: I0308 21:59:37.768757 7480 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" podUID="3f1a7900-a0b2-47fc-b43c-a0a5dee6b657" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 08 21:59:38.713713 master-0 kubenswrapper[7480]: I0308 21:59:38.713611 7480 patch_prober.go:28] interesting pod/etcd-operator-5884b9cd56-bh88w container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused" start-of-body= Mar 08 21:59:38.713713 master-0 kubenswrapper[7480]: I0308 21:59:38.713690 7480 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" podUID="4382d186-34e4-40af-9b92-bb17ddcaa23f" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused" Mar 08 21:59:40.545700 master-0 kubenswrapper[7480]: I0308 21:59:40.545596 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_c633355a-b323-4458-8ecb-1e490d115f59/installer/0.log" Mar 08 21:59:40.545700 master-0 kubenswrapper[7480]: I0308 21:59:40.545702 7480 generic.go:334] "Generic (PLEG): container finished" podID="c633355a-b323-4458-8ecb-1e490d115f59" containerID="28682516e11b7da515d28696337779453c2c96bd4cf9bfd8a8b3aa00aef7307b" exitCode=1 Mar 08 21:59:41.815290 master-0 kubenswrapper[7480]: E0308 21:59:41.815129 7480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Mar 08 21:59:42.350995 master-0 kubenswrapper[7480]: E0308 21:59:42.350836 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T21:59:32Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T21:59:32Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T21:59:32Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T21:59:32Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:ae042a5d32eb2f18d537f2068849e665b55df7d8360daedaaeea98bd2a79e769\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:d077bbabe6cb885ed229119008480493e8364e4bfddaa00b099f68c52b016e6b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1733328350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:063b8972231e65eb43f6545ba37804f68138dc54d97b91a652a1c5bc7dc76aa5\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:cf682d23b2857e455609879a0867d171a221c18e2cec995dd79570b77c5a4705\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1272201949},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:e0c034ae18daa01af8d073f8cc24ae4af87883c664304910eab1167fdfd60c0b\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:ef0c6b9e405f7a452211e063ce07ded04ccbe38b53860bfd71b5a7cd5072830a\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1229556414},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:82ad8d62d92a8cc5e2391e3b0746219bd740cc26741bc7571010d337240fa112\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:ec87cd8fce2d3b4e2b15f9abaea232c03ff5a6dd46002ea24418a21973abf220\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1220167895},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8\\\"],\\\"sizeBytes\\\":880378279},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66
fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d74fe7cb12c554c120262683d9c4066f33ae4f60a5fad83cba419d851b98c12d\\\"],\\\"sizeBytes\\\":470822665},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e9ee63a30a9b95b5801afa36e09fc583ec2cda3c5cb3c8676e478fea016abfa1\\\"],\\\"sizeBytes\\\":470680779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebee49810f493f9b566740bd61256fd40b897cc51423f1efa01a02bb57ce177d\\\"],\\\"sizeBytes\\\":467234714},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\\\"],\\\"sizeBytes\\\":465086330},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1\\\"],\\\"sizeBytes\\\":463700811},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b714a7ada1e295b599b432f32e1fd5b74c8cdbe6fe51e95306322b25cb873914\\\"],\\\"sizeBytes\\\":458126424}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 21:59:47.768941 master-0 kubenswrapper[7480]: I0308 21:59:47.768809 
7480 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-8h8fx container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 08 21:59:47.768941 master-0 kubenswrapper[7480]: I0308 21:59:47.768938 7480 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" podUID="3f1a7900-a0b2-47fc-b43c-a0a5dee6b657" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 08 21:59:50.641432 master-0 kubenswrapper[7480]: I0308 21:59:50.641357 7480 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="6b2d935675401022d0d3a5a0cba88a9960e98d4a712ee887558cdfed52b47cbb" exitCode=1 Mar 08 21:59:52.216329 master-0 kubenswrapper[7480]: E0308 21:59:52.216159 7480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Mar 08 21:59:52.352694 master-0 kubenswrapper[7480]: E0308 21:59:52.352545 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 21:59:57.768745 master-0 kubenswrapper[7480]: I0308 21:59:57.768666 7480 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-8h8fx container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 08 21:59:57.769740 master-0 kubenswrapper[7480]: I0308 21:59:57.768785 7480 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" podUID="3f1a7900-a0b2-47fc-b43c-a0a5dee6b657" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 08 22:00:02.354476 master-0 kubenswrapper[7480]: E0308 22:00:02.354359 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 22:00:03.018445 master-0 kubenswrapper[7480]: E0308 22:00:03.018337 7480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Mar 08 22:00:07.815277 master-0 kubenswrapper[7480]: E0308 22:00:07.815139 7480 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 08 22:00:07.816549 master-0 
kubenswrapper[7480]: E0308 22:00:07.815475 7480 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.015s" Mar 08 22:00:07.816549 master-0 kubenswrapper[7480]: I0308 22:00:07.815518 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 22:00:07.816549 master-0 kubenswrapper[7480]: I0308 22:00:07.815558 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" event={"ID":"d4d01185-e485-4697-92c2-31a044f25d82","Type":"ContainerDied","Data":"2f8d7fcda4e6f52fa1e1bae05fb59e3135aaa4a13581f1a085c1284cb2c0e356"} Mar 08 22:00:07.816549 master-0 kubenswrapper[7480]: I0308 22:00:07.815598 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"9bb960964f2aa5ec9092fb22294453593afb84fc828f2c73db9eceae206a6489"} Mar 08 22:00:07.816549 master-0 kubenswrapper[7480]: I0308 22:00:07.815622 7480 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 22:00:07.816549 master-0 kubenswrapper[7480]: I0308 22:00:07.815641 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0","Type":"ContainerDied","Data":"23ca4cac0c50a9d156ec6ed1b11f780e700b2306444f16b3646285a8a0f6b21b"} Mar 08 22:00:07.817356 master-0 kubenswrapper[7480]: I0308 22:00:07.817221 7480 scope.go:117] "RemoveContainer" containerID="6b2d935675401022d0d3a5a0cba88a9960e98d4a712ee887558cdfed52b47cbb" Mar 08 22:00:07.817500 master-0 kubenswrapper[7480]: I0308 22:00:07.817452 7480 scope.go:117] "RemoveContainer" containerID="2f8d7fcda4e6f52fa1e1bae05fb59e3135aaa4a13581f1a085c1284cb2c0e356" Mar 08 22:00:07.817572 master-0 kubenswrapper[7480]: I0308 22:00:07.817486 7480 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"a33aa7650397c6fcbc3db8208664515afb6c26ede2b1533a472f078a2d4a0ea4"} pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Mar 08 22:00:07.817642 master-0 kubenswrapper[7480]: I0308 22:00:07.817609 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" podUID="3f1a7900-a0b2-47fc-b43c-a0a5dee6b657" containerName="authentication-operator" containerID="cri-o://a33aa7650397c6fcbc3db8208664515afb6c26ede2b1533a472f078a2d4a0ea4" gracePeriod=30 Mar 08 22:00:07.829199 master-0 kubenswrapper[7480]: I0308 22:00:07.829128 7480 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 08 22:00:08.755425 master-0 kubenswrapper[7480]: I0308 22:00:08.755350 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-x8jg8_d4d01185-e485-4697-92c2-31a044f25d82/openshift-controller-manager-operator/1.log" Mar 08 22:00:09.084674 master-0 kubenswrapper[7480]: I0308 22:00:09.084599 7480 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0/installer/0.log" Mar 08 22:00:09.085610 master-0 kubenswrapper[7480]: I0308 22:00:09.084699 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 08 22:00:09.148713 master-0 kubenswrapper[7480]: I0308 22:00:09.148623 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0-var-lock\") pod \"147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0\" (UID: \"147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0\") " Mar 08 22:00:09.148713 master-0 kubenswrapper[7480]: I0308 22:00:09.148724 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0-kube-api-access\") pod \"147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0\" (UID: \"147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0\") " Mar 08 22:00:09.149061 master-0 kubenswrapper[7480]: I0308 22:00:09.148775 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0-kubelet-dir\") pod \"147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0\" (UID: \"147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0\") " Mar 08 22:00:09.149061 master-0 kubenswrapper[7480]: I0308 22:00:09.148860 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0-var-lock" (OuterVolumeSpecName: "var-lock") pod "147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0" (UID: "147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:00:09.149061 master-0 kubenswrapper[7480]: I0308 22:00:09.148960 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0" (UID: "147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:00:09.149191 master-0 kubenswrapper[7480]: I0308 22:00:09.149151 7480 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 08 22:00:09.149191 master-0 kubenswrapper[7480]: I0308 22:00:09.149173 7480 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 22:00:09.152366 master-0 kubenswrapper[7480]: I0308 22:00:09.152330 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0" (UID: "147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 22:00:09.250787 master-0 kubenswrapper[7480]: I0308 22:00:09.250709 7480 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 08 22:00:09.767493 master-0 kubenswrapper[7480]: I0308 22:00:09.767429 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0/installer/0.log" Mar 08 22:00:09.767807 master-0 kubenswrapper[7480]: I0308 22:00:09.767581 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 08 22:00:09.995667 master-0 kubenswrapper[7480]: E0308 22:00:09.995426 7480 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{installer-2-master-0.189afc925ed1c1f8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-2-master-0,UID:78dc543f-66ed-4098-b5a9-699ec2ccc856,APIVersion:v1,ResourceVersion:9036,FieldPath:spec.containers{installer},},Reason:Created,Message:Created container: installer,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:58:28.90994124 +0000 UTC m=+59.363561842,LastTimestamp:2026-03-08 21:58:28.90994124 +0000 UTC m=+59.363561842,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 22:00:11.636769 master-0 kubenswrapper[7480]: E0308 22:00:11.636613 7480 projected.go:194] Error preparing data for projected volume kube-api-access-sdfls for pod openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 08 22:00:11.636769 master-0 kubenswrapper[7480]: E0308 22:00:11.636741 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad-kube-api-access-sdfls podName:c228b17c-fd7b-4273-ac03-eac5d4a3a4ad nodeName:}" failed. No retries permitted until 2026-03-08 22:00:13.636705583 +0000 UTC m=+164.090326225 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-sdfls" (UniqueName: "kubernetes.io/projected/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad-kube-api-access-sdfls") pod "cluster-storage-operator-6fbfc8dc8f-p68k6" (UID: "c228b17c-fd7b-4273-ac03-eac5d4a3a4ad") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 08 22:00:11.738467 master-0 kubenswrapper[7480]: E0308 22:00:11.738319 7480 projected.go:194] Error preparing data for projected volume kube-api-access-zj5rx for pod openshift-marketplace/community-operators-47cmq: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 08 22:00:11.738467 master-0 kubenswrapper[7480]: E0308 22:00:11.738484 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89619d97-2c16-4e76-ba80-8b519f6a9366-kube-api-access-zj5rx podName:89619d97-2c16-4e76-ba80-8b519f6a9366 nodeName:}" failed. 
No retries permitted until 2026-03-08 22:00:13.738452092 +0000 UTC m=+164.192072784 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-zj5rx" (UniqueName: "kubernetes.io/projected/89619d97-2c16-4e76-ba80-8b519f6a9366-kube-api-access-zj5rx") pod "community-operators-47cmq" (UID: "89619d97-2c16-4e76-ba80-8b519f6a9366") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 08 22:00:11.782215 master-0 kubenswrapper[7480]: I0308 22:00:11.782135 7480 generic.go:334] "Generic (PLEG): container finished" podID="3f1a7900-a0b2-47fc-b43c-a0a5dee6b657" containerID="a33aa7650397c6fcbc3db8208664515afb6c26ede2b1533a472f078a2d4a0ea4" exitCode=0 Mar 08 22:00:12.355098 master-0 kubenswrapper[7480]: E0308 22:00:12.354996 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 22:00:13.717154 master-0 kubenswrapper[7480]: I0308 22:00:13.717033 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdfls\" (UniqueName: \"kubernetes.io/projected/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad-kube-api-access-sdfls\") pod \"cluster-storage-operator-6fbfc8dc8f-p68k6\" (UID: \"c228b17c-fd7b-4273-ac03-eac5d4a3a4ad\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6" Mar 08 22:00:13.819504 master-0 kubenswrapper[7480]: I0308 22:00:13.819408 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj5rx\" (UniqueName: \"kubernetes.io/projected/89619d97-2c16-4e76-ba80-8b519f6a9366-kube-api-access-zj5rx\") pod \"community-operators-47cmq\" (UID: \"89619d97-2c16-4e76-ba80-8b519f6a9366\") " pod="openshift-marketplace/community-operators-47cmq" Mar 08 22:00:14.620288 master-0 kubenswrapper[7480]: E0308 22:00:14.619770 7480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Mar 08 22:00:18.836314 master-0 kubenswrapper[7480]: I0308 22:00:18.836228 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-wklhr_c901b468-b8e9-48f8-8050-0d54e24e2adb/snapshot-controller/0.log" Mar 08 22:00:18.836314 master-0 kubenswrapper[7480]: I0308 22:00:18.836306 7480 generic.go:334] "Generic (PLEG): container finished" podID="c901b468-b8e9-48f8-8050-0d54e24e2adb" containerID="975d86808356450f32e152ee3c49e6ab2d8f04281755488f22f0b7506389bb2d" exitCode=1 Mar 08 22:00:19.846261 master-0 kubenswrapper[7480]: I0308 22:00:19.846213 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-nk294_077643a2-ab2d-4f12-9abf-42a1c56d7aff/manager/0.log" Mar 08 22:00:19.847359 master-0 kubenswrapper[7480]: I0308 22:00:19.847308 7480 generic.go:334] "Generic (PLEG): container finished" podID="077643a2-ab2d-4f12-9abf-42a1c56d7aff" containerID="5946b7f2d9d566068ae07c485f39d2cd8eea56a2d551b41eae667da0ce359cfb" exitCode=1 Mar 08 22:00:19.849738 master-0 kubenswrapper[7480]: I0308 22:00:19.849689 7480 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-qv4bv_2a91f36f-900e-4b99-9be1-dfc61d8e31d9/manager/0.log" Mar 08 22:00:19.850425 master-0 kubenswrapper[7480]: I0308 22:00:19.850360 7480 generic.go:334] "Generic (PLEG): container finished" podID="2a91f36f-900e-4b99-9be1-dfc61d8e31d9" containerID="69b4132a818df716de03fdd12ebf683c551197394c831d762cb2338396e793c4" exitCode=1 Mar 08 22:00:20.825378 master-0 kubenswrapper[7480]: E0308 22:00:20.825252 7480 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 08 22:00:21.498694 master-0 kubenswrapper[7480]: I0308 22:00:21.498576 7480 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-qv4bv container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Mar 08 22:00:21.499532 master-0 kubenswrapper[7480]: I0308 22:00:21.498700 7480 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" podUID="2a91f36f-900e-4b99-9be1-dfc61d8e31d9" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Mar 08 22:00:21.499532 master-0 kubenswrapper[7480]: I0308 22:00:21.498752 7480 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-qv4bv container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.43:8081/healthz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Mar 08 22:00:21.499532 master-0 kubenswrapper[7480]: I0308 22:00:21.498853 7480 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" podUID="2a91f36f-900e-4b99-9be1-dfc61d8e31d9" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/healthz\": dial tcp 10.128.0.43:8081: connect: connection refused" Mar 08 22:00:21.859934 master-0 kubenswrapper[7480]: I0308 22:00:21.859832 7480 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-nk294 container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Mar 08 22:00:21.859934 master-0 kubenswrapper[7480]: I0308 22:00:21.859894 7480 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" podUID="077643a2-ab2d-4f12-9abf-42a1c56d7aff" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Mar 08 22:00:21.860369 master-0 kubenswrapper[7480]: I0308 22:00:21.859927 7480 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-nk294 container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Mar 08 22:00:21.860369 master-0 kubenswrapper[7480]: I0308 22:00:21.860025 7480 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" podUID="077643a2-ab2d-4f12-9abf-42a1c56d7aff" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" Mar 08 22:00:22.355909 master-0 kubenswrapper[7480]: E0308 22:00:22.355783 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 22:00:22.355909 master-0 kubenswrapper[7480]: E0308 22:00:22.355854 7480 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 08 22:00:27.821935 master-0 kubenswrapper[7480]: E0308 22:00:27.821841 7480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Mar 08 22:00:27.917189 master-0 kubenswrapper[7480]: I0308 22:00:27.917141 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-cjdgr_84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/ingress-operator/0.log" Mar 08 22:00:27.917315 master-0 kubenswrapper[7480]: I0308 22:00:27.917203 7480 generic.go:334] "Generic (PLEG): container finished" podID="84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed" containerID="1a0df161078208a525b4d1fb6d4ca6198700570b496ec5545cc3b9587304d8a5" exitCode=1 Mar 08 22:00:28.087719 master-0 kubenswrapper[7480]: I0308 22:00:28.087551 7480 status_manager.go:851] "Failed to get status for pod" podUID="088eecd9-a153-4fe0-af5a-78f5bdc0eb6b" pod="openshift-marketplace/redhat-operators-8w7wm" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods redhat-operators-8w7wm)" Mar 08 22:00:29.212498 master-0 kubenswrapper[7480]: E0308 22:00:29.210688 7480 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 08 22:00:29.212498 master-0 kubenswrapper[7480]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-autoscaler-operator-69576476f7-dvgxg_openshift-machine-api_d9fe466f-5a23-4f69-8a96-44bd5d6194f5_0(9acc1157ff6419984c9222f358a8b89a149b70dd33398836d26a60840e08ea33): error adding pod openshift-machine-api_cluster-autoscaler-operator-69576476f7-dvgxg to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9acc1157ff6419984c9222f358a8b89a149b70dd33398836d26a60840e08ea33" Netns:"/var/run/netns/400a54d3-aabd-4492-b2f3-664268fd9129" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-autoscaler-operator-69576476f7-dvgxg;K8S_POD_INFRA_CONTAINER_ID=9acc1157ff6419984c9222f358a8b89a149b70dd33398836d26a60840e08ea33;K8S_POD_UID=d9fe466f-5a23-4f69-8a96-44bd5d6194f5" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg] networking: Multus: [openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg/d9fe466f-5a23-4f69-8a96-44bd5d6194f5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod 
cluster-autoscaler-operator-69576476f7-dvgxg in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-autoscaler-operator-69576476f7-dvgxg in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-69576476f7-dvgxg?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 08 22:00:29.212498 master-0 kubenswrapper[7480]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 08 22:00:29.212498 master-0 kubenswrapper[7480]: > Mar 08 22:00:29.212498 master-0 kubenswrapper[7480]: E0308 22:00:29.210783 7480 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 08 22:00:29.212498 master-0 kubenswrapper[7480]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-autoscaler-operator-69576476f7-dvgxg_openshift-machine-api_d9fe466f-5a23-4f69-8a96-44bd5d6194f5_0(9acc1157ff6419984c9222f358a8b89a149b70dd33398836d26a60840e08ea33): error adding pod openshift-machine-api_cluster-autoscaler-operator-69576476f7-dvgxg to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9acc1157ff6419984c9222f358a8b89a149b70dd33398836d26a60840e08ea33" Netns:"/var/run/netns/400a54d3-aabd-4492-b2f3-664268fd9129" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-autoscaler-operator-69576476f7-dvgxg;K8S_POD_INFRA_CONTAINER_ID=9acc1157ff6419984c9222f358a8b89a149b70dd33398836d26a60840e08ea33;K8S_POD_UID=d9fe466f-5a23-4f69-8a96-44bd5d6194f5" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg] networking: Multus: [openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg/d9fe466f-5a23-4f69-8a96-44bd5d6194f5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-autoscaler-operator-69576476f7-dvgxg in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-autoscaler-operator-69576476f7-dvgxg in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-69576476f7-dvgxg?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 08 22:00:29.212498 master-0 kubenswrapper[7480]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 08 22:00:29.212498 master-0 kubenswrapper[7480]: > pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" Mar 08 22:00:29.212498 master-0 kubenswrapper[7480]: E0308 22:00:29.210814 7480 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 08 22:00:29.212498 master-0 
kubenswrapper[7480]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-autoscaler-operator-69576476f7-dvgxg_openshift-machine-api_d9fe466f-5a23-4f69-8a96-44bd5d6194f5_0(9acc1157ff6419984c9222f358a8b89a149b70dd33398836d26a60840e08ea33): error adding pod openshift-machine-api_cluster-autoscaler-operator-69576476f7-dvgxg to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9acc1157ff6419984c9222f358a8b89a149b70dd33398836d26a60840e08ea33" Netns:"/var/run/netns/400a54d3-aabd-4492-b2f3-664268fd9129" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-autoscaler-operator-69576476f7-dvgxg;K8S_POD_INFRA_CONTAINER_ID=9acc1157ff6419984c9222f358a8b89a149b70dd33398836d26a60840e08ea33;K8S_POD_UID=d9fe466f-5a23-4f69-8a96-44bd5d6194f5" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg] networking: Multus: [openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg/d9fe466f-5a23-4f69-8a96-44bd5d6194f5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-autoscaler-operator-69576476f7-dvgxg in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-autoscaler-operator-69576476f7-dvgxg in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-69576476f7-dvgxg?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 08 22:00:29.212498 master-0 kubenswrapper[7480]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 08 22:00:29.212498 master-0 kubenswrapper[7480]: > pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" Mar 08 22:00:29.212498 master-0 kubenswrapper[7480]: E0308 22:00:29.210892 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-autoscaler-operator-69576476f7-dvgxg_openshift-machine-api(d9fe466f-5a23-4f69-8a96-44bd5d6194f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cluster-autoscaler-operator-69576476f7-dvgxg_openshift-machine-api(d9fe466f-5a23-4f69-8a96-44bd5d6194f5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-autoscaler-operator-69576476f7-dvgxg_openshift-machine-api_d9fe466f-5a23-4f69-8a96-44bd5d6194f5_0(9acc1157ff6419984c9222f358a8b89a149b70dd33398836d26a60840e08ea33): error adding pod openshift-machine-api_cluster-autoscaler-operator-69576476f7-dvgxg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"9acc1157ff6419984c9222f358a8b89a149b70dd33398836d26a60840e08ea33\\\" Netns:\\\"/var/run/netns/400a54d3-aabd-4492-b2f3-664268fd9129\\\" IfName:\\\"eth0\\\" 
Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-autoscaler-operator-69576476f7-dvgxg;K8S_POD_INFRA_CONTAINER_ID=9acc1157ff6419984c9222f358a8b89a149b70dd33398836d26a60840e08ea33;K8S_POD_UID=d9fe466f-5a23-4f69-8a96-44bd5d6194f5\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg] networking: Multus: [openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg/d9fe466f-5a23-4f69-8a96-44bd5d6194f5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-autoscaler-operator-69576476f7-dvgxg in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-autoscaler-operator-69576476f7-dvgxg in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-69576476f7-dvgxg?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" podUID="d9fe466f-5a23-4f69-8a96-44bd5d6194f5" Mar 08 22:00:29.933840 master-0 kubenswrapper[7480]: I0308 22:00:29.933764 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_78dc543f-66ed-4098-b5a9-699ec2ccc856/installer/0.log" Mar 08 22:00:29.933840 master-0 kubenswrapper[7480]: I0308 22:00:29.933829 7480 generic.go:334] "Generic (PLEG): container finished" podID="78dc543f-66ed-4098-b5a9-699ec2ccc856" containerID="b72861ea5791b8527c79a3ba9ca252aad4949d7fe333b8f4afa8d681aa68f9d1" exitCode=1 Mar 08 22:00:30.167495 master-0 kubenswrapper[7480]: E0308 22:00:30.167435 7480 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 08 22:00:30.167495 master-0 kubenswrapper[7480]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_insights-operator-8f89dfddd-fn4ck_openshift-insights_66e50eed-e3ac-431f-931b-7c4c848c491b_0(eb730600d289ad145a341379b9915717e512347fe1009b23c4d2a21068bcdea0): error adding pod openshift-insights_insights-operator-8f89dfddd-fn4ck to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"eb730600d289ad145a341379b9915717e512347fe1009b23c4d2a21068bcdea0" Netns:"/var/run/netns/34fd4414-70d8-4cea-b56b-020a58d1de31" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-8f89dfddd-fn4ck;K8S_POD_INFRA_CONTAINER_ID=eb730600d289ad145a341379b9915717e512347fe1009b23c4d2a21068bcdea0;K8S_POD_UID=66e50eed-e3ac-431f-931b-7c4c848c491b" Path:"" ERRORED: error configuring pod [openshift-insights/insights-operator-8f89dfddd-fn4ck] networking: Multus: [openshift-insights/insights-operator-8f89dfddd-fn4ck/66e50eed-e3ac-431f-931b-7c4c848c491b]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod 
insights-operator-8f89dfddd-fn4ck in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-8f89dfddd-fn4ck in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-8f89dfddd-fn4ck?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 08 22:00:30.167495 master-0 kubenswrapper[7480]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 08 22:00:30.167495 master-0 kubenswrapper[7480]: > Mar 08 22:00:30.168211 master-0 kubenswrapper[7480]: E0308 22:00:30.167535 7480 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 08 22:00:30.168211 master-0 kubenswrapper[7480]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_insights-operator-8f89dfddd-fn4ck_openshift-insights_66e50eed-e3ac-431f-931b-7c4c848c491b_0(eb730600d289ad145a341379b9915717e512347fe1009b23c4d2a21068bcdea0): error adding pod openshift-insights_insights-operator-8f89dfddd-fn4ck to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"eb730600d289ad145a341379b9915717e512347fe1009b23c4d2a21068bcdea0" Netns:"/var/run/netns/34fd4414-70d8-4cea-b56b-020a58d1de31" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-8f89dfddd-fn4ck;K8S_POD_INFRA_CONTAINER_ID=eb730600d289ad145a341379b9915717e512347fe1009b23c4d2a21068bcdea0;K8S_POD_UID=66e50eed-e3ac-431f-931b-7c4c848c491b" Path:"" ERRORED: error configuring pod [openshift-insights/insights-operator-8f89dfddd-fn4ck] networking: Multus: [openshift-insights/insights-operator-8f89dfddd-fn4ck/66e50eed-e3ac-431f-931b-7c4c848c491b]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-8f89dfddd-fn4ck in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-8f89dfddd-fn4ck in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-8f89dfddd-fn4ck?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 08 22:00:30.168211 master-0 kubenswrapper[7480]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 08 22:00:30.168211 master-0 kubenswrapper[7480]: > pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" Mar 08 22:00:30.168211 master-0 kubenswrapper[7480]: E0308 22:00:30.167563 7480 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 08 22:00:30.168211 master-0 kubenswrapper[7480]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_insights-operator-8f89dfddd-fn4ck_openshift-insights_66e50eed-e3ac-431f-931b-7c4c848c491b_0(eb730600d289ad145a341379b9915717e512347fe1009b23c4d2a21068bcdea0): error adding pod openshift-insights_insights-operator-8f89dfddd-fn4ck to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"eb730600d289ad145a341379b9915717e512347fe1009b23c4d2a21068bcdea0" Netns:"/var/run/netns/34fd4414-70d8-4cea-b56b-020a58d1de31" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-8f89dfddd-fn4ck;K8S_POD_INFRA_CONTAINER_ID=eb730600d289ad145a341379b9915717e512347fe1009b23c4d2a21068bcdea0;K8S_POD_UID=66e50eed-e3ac-431f-931b-7c4c848c491b" Path:"" ERRORED: error configuring pod [openshift-insights/insights-operator-8f89dfddd-fn4ck] networking: Multus: [openshift-insights/insights-operator-8f89dfddd-fn4ck/66e50eed-e3ac-431f-931b-7c4c848c491b]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-8f89dfddd-fn4ck in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-8f89dfddd-fn4ck in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-8f89dfddd-fn4ck?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 08 22:00:30.168211 master-0 kubenswrapper[7480]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 08 22:00:30.168211 master-0 kubenswrapper[7480]: > pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" Mar 08 22:00:30.168211 master-0 kubenswrapper[7480]: E0308 22:00:30.167641 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"insights-operator-8f89dfddd-fn4ck_openshift-insights(66e50eed-e3ac-431f-931b-7c4c848c491b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"insights-operator-8f89dfddd-fn4ck_openshift-insights(66e50eed-e3ac-431f-931b-7c4c848c491b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_insights-operator-8f89dfddd-fn4ck_openshift-insights_66e50eed-e3ac-431f-931b-7c4c848c491b_0(eb730600d289ad145a341379b9915717e512347fe1009b23c4d2a21068bcdea0): error adding pod openshift-insights_insights-operator-8f89dfddd-fn4ck to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"eb730600d289ad145a341379b9915717e512347fe1009b23c4d2a21068bcdea0\\\" Netns:\\\"/var/run/netns/34fd4414-70d8-4cea-b56b-020a58d1de31\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-8f89dfddd-fn4ck;K8S_POD_INFRA_CONTAINER_ID=eb730600d289ad145a341379b9915717e512347fe1009b23c4d2a21068bcdea0;K8S_POD_UID=66e50eed-e3ac-431f-931b-7c4c848c491b\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-insights/insights-operator-8f89dfddd-fn4ck] networking: Multus: 
[openshift-insights/insights-operator-8f89dfddd-fn4ck/66e50eed-e3ac-431f-931b-7c4c848c491b]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-8f89dfddd-fn4ck in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-8f89dfddd-fn4ck in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-8f89dfddd-fn4ck?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" podUID="66e50eed-e3ac-431f-931b-7c4c848c491b" Mar 08 22:00:30.874977 master-0 kubenswrapper[7480]: E0308 22:00:30.874786 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-sdfls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6" podUID="c228b17c-fd7b-4273-ac03-eac5d4a3a4ad" Mar 08 22:00:30.919673 master-0 kubenswrapper[7480]: E0308 22:00:30.919511 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-zj5rx], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-marketplace/community-operators-47cmq" podUID="89619d97-2c16-4e76-ba80-8b519f6a9366" Mar 08 22:00:30.941055 master-0 kubenswrapper[7480]: I0308 22:00:30.940950 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-47cmq" Mar 08 22:00:30.941055 master-0 kubenswrapper[7480]: I0308 22:00:30.940963 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6" Mar 08 22:00:31.499218 master-0 kubenswrapper[7480]: I0308 22:00:31.499141 7480 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-qv4bv container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Mar 08 22:00:31.499652 master-0 kubenswrapper[7480]: I0308 22:00:31.499604 7480 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" podUID="2a91f36f-900e-4b99-9be1-dfc61d8e31d9" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Mar 08 22:00:31.860252 master-0 kubenswrapper[7480]: I0308 22:00:31.860176 7480 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-nk294 container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Mar 08 22:00:31.860252 master-0 kubenswrapper[7480]: I0308 22:00:31.860255 7480 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" podUID="077643a2-ab2d-4f12-9abf-42a1c56d7aff" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Mar 08 22:00:31.949124 master-0 kubenswrapper[7480]: I0308 22:00:31.949035 7480 generic.go:334] "Generic (PLEG): container finished" podID="7e0267ba-5dd7-4e81-885f-95b27a7b42ea" containerID="c7c62eecaac8f5df8b2da98122fad8c96cfc54251fbf2aa75a9ba067018db826" exitCode=0 Mar 08 22:00:36.636065 master-0 kubenswrapper[7480]: I0308 22:00:36.635971 7480 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-5ljhh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" start-of-body= Mar 08 22:00:36.640359 master-0 kubenswrapper[7480]: I0308 22:00:36.636126 7480 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" podUID="7e0267ba-5dd7-4e81-885f-95b27a7b42ea" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" Mar 08 22:00:36.640359 master-0 kubenswrapper[7480]: I0308 22:00:36.636013 7480 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-5ljhh container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" start-of-body= Mar 08 22:00:36.640359 master-0 kubenswrapper[7480]: I0308 22:00:36.637151 7480 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" podUID="7e0267ba-5dd7-4e81-885f-95b27a7b42ea" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" Mar 08 22:00:38.713997 master-0 kubenswrapper[7480]: I0308 
22:00:38.713889 7480 patch_prober.go:28] interesting pod/etcd-operator-5884b9cd56-bh88w container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused" start-of-body= Mar 08 22:00:38.713997 master-0 kubenswrapper[7480]: I0308 22:00:38.713982 7480 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" podUID="4382d186-34e4-40af-9b92-bb17ddcaa23f" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused" Mar 08 22:00:41.499214 master-0 kubenswrapper[7480]: I0308 22:00:41.499105 7480 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-qv4bv container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Mar 08 22:00:41.500118 master-0 kubenswrapper[7480]: I0308 22:00:41.499222 7480 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" podUID="2a91f36f-900e-4b99-9be1-dfc61d8e31d9" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Mar 08 22:00:41.500118 master-0 kubenswrapper[7480]: I0308 22:00:41.499109 7480 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-qv4bv container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.43:8081/healthz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Mar 08 22:00:41.500118 master-0 kubenswrapper[7480]: I0308 22:00:41.499346 7480 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" podUID="2a91f36f-900e-4b99-9be1-dfc61d8e31d9" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/healthz\": dial tcp 10.128.0.43:8081: connect: connection refused" Mar 08 22:00:41.833773 master-0 kubenswrapper[7480]: E0308 22:00:41.833672 7480 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 08 22:00:41.834201 master-0 kubenswrapper[7480]: E0308 22:00:41.833929 7480 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.018s" Mar 08 22:00:41.834201 master-0 kubenswrapper[7480]: I0308 22:00:41.833973 7480 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 22:00:41.834201 master-0 kubenswrapper[7480]: I0308 22:00:41.834055 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8w7wm" Mar 08 22:00:41.834201 master-0 kubenswrapper[7480]: I0308 22:00:41.834142 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 22:00:41.834746 master-0 kubenswrapper[7480]: I0308 22:00:41.834690 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" Mar 08 22:00:41.835226 master-0 kubenswrapper[7480]: I0308 22:00:41.835141 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" Mar 08 22:00:41.836523 master-0 kubenswrapper[7480]: I0308 22:00:41.836482 7480 scope.go:117] "RemoveContainer" containerID="939aa1886a91ab1eb51e8a1cf13c57622098c7bede001e5d513bea76546b85fa" Mar 08 22:00:41.836917 master-0 kubenswrapper[7480]: I0308 22:00:41.836841 7480 scope.go:117] "RemoveContainer" containerID="fa11530abd773575590a911f848030e060ab34b160f17f0ed7e7dadcd26f2550" Mar 08 22:00:41.837056 master-0 kubenswrapper[7480]: I0308 22:00:41.836927 7480 scope.go:117] "RemoveContainer" containerID="c086cbd7303ffe955bb2645d06594a1046769c847ec0d61ce7c507a7b2e3ee42" Mar 08 22:00:41.837056 master-0 kubenswrapper[7480]: I0308 22:00:41.836995 7480 scope.go:117] "RemoveContainer" containerID="2372290458f059a617f7c34963da0c908f74ff47559433f117b121db9f6a2646" Mar 08 22:00:41.837327 master-0 kubenswrapper[7480]: I0308 22:00:41.837271 7480 scope.go:117] "RemoveContainer" containerID="69b4132a818df716de03fdd12ebf683c551197394c831d762cb2338396e793c4" Mar 08 22:00:41.837868 master-0 kubenswrapper[7480]: I0308 22:00:41.837802 7480 scope.go:117] "RemoveContainer" containerID="00c5ed3578644c2cfcf3b05743187fa1a4e66cf46b816a9e956e779028d0b36b" Mar 08 22:00:41.838023 master-0 kubenswrapper[7480]: I0308 22:00:41.837982 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" Mar 08 22:00:41.838321 master-0 kubenswrapper[7480]: I0308 22:00:41.838270 7480 scope.go:117] "RemoveContainer" containerID="334ebc87bbf952673cd1b3477f45396aaf813413e807f2bdfa8f48d87bc817d9" Mar 08 22:00:41.838593 master-0 kubenswrapper[7480]: I0308 22:00:41.838555 7480 scope.go:117] "RemoveContainer" containerID="6df6f113522fa49700aeaebc115d4f7bc3c6c606f1453723e6b3427085f53838" Mar 08 22:00:41.839508 master-0 kubenswrapper[7480]: I0308 22:00:41.839017 7480 scope.go:117] "RemoveContainer" containerID="73a8f9d32fb6d4973561166a1225ead4683b3110d97d82f0bed60b3b5a68361b" Mar 08 22:00:41.839508 master-0 kubenswrapper[7480]: I0308 22:00:41.839042 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" Mar 08 22:00:41.839508 master-0 kubenswrapper[7480]: I0308 22:00:41.839397 7480 scope.go:117] "RemoveContainer" containerID="33e74f7c7bc9716ac9cd2cfb19a68cc948644c1413dc78e99dffc063fbe5f927" Mar 08 22:00:41.840235 master-0 kubenswrapper[7480]: I0308 22:00:41.840203 7480 scope.go:117] "RemoveContainer" containerID="6edcb8198a1dd9b552f9d5577953c53700190a2b87b4307329abfdbc057033f6" Mar 08 22:00:41.858390 master-0 kubenswrapper[7480]: I0308 22:00:41.857843 7480 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 08 22:00:41.860047 master-0 kubenswrapper[7480]: I0308 22:00:41.859994 7480 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-nk294 container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Mar 08 22:00:41.860376 master-0 kubenswrapper[7480]: I0308 22:00:41.860327 7480 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" podUID="077643a2-ab2d-4f12-9abf-42a1c56d7aff" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" Mar 08 22:00:41.860552 master-0 kubenswrapper[7480]: I0308 22:00:41.860166 7480 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-nk294 container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Mar 08 22:00:41.860664 master-0 kubenswrapper[7480]: I0308 22:00:41.860584 7480 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" podUID="077643a2-ab2d-4f12-9abf-42a1c56d7aff" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Mar 08 22:00:42.390442 master-0 kubenswrapper[7480]: I0308 22:00:42.390380 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_78dc543f-66ed-4098-b5a9-699ec2ccc856/installer/0.log" Mar 08 22:00:42.390550 master-0 kubenswrapper[7480]: I0308 22:00:42.390528 7480 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 08 22:00:42.422866 master-0 kubenswrapper[7480]: E0308 22:00:42.422729 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:00:32Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:00:32Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:00:32Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:00:32Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:ae042a5d32eb2f18d537f2068849e665b55df7d8360daedaaeea98bd2a79e769\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:d077bbabe6cb885ed229119008480493e8364e4bfddaa00b099f68c52b016e6b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1733328350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:063b8972231e65eb43f6545ba37804f68138dc54d97b91a652a1c5bc7dc76aa5\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:cf682d23b2857e455609879a0867d171a221c18e2cec995dd79570b77c5a4705\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1272201949},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:e0c034ae18daa01af8d073f8cc24ae4af87883c664304910eab1167fdfd60c0b\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:ef0c6b9e405f7a452211e063ce07ded04ccbe38b53860bfd71b5a7cd5072830a\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1229556414},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:82ad8d62d92a8cc5e2391e3b0746219bd740cc26741bc7571010d337240fa112\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:ec87cd8fce2d3b4e2b15f9abaea232c03ff5a6dd46002ea24418a21973abf220\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1220167895},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8\\\"],\\\"sizeBytes\\\":880378279},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\
\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45a
aafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d74fe7cb12c554c120262683d9c4066f33ae4f60a5fad83cba419d851b98c12d\\\"],\\\"sizeBytes\\\":470822665},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e9ee63a30a9b95b5801afa36e09fc583ec2cda3c5cb3c8676e478fea016abfa1\\\"],\\\"sizeBytes\\\":470680779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebee49810f493f9b566740bd61256fd40b897cc51423f1efa01a02bb57ce177d\\\"],\\\"sizeBytes\\\":467234714},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\\\"],\\\"sizeBytes\\\":465086330},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1\\\"],\\\"sizeBytes\\\":463700811},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b714a7ada1e295b599b432f32e1fd5b74c8cdbe6fe51e95306322b25cb873914\\\"],\\\"sizeBytes\\\":458126424}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for 
node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 22:00:42.450298 master-0 kubenswrapper[7480]: I0308 22:00:42.450247 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_c633355a-b323-4458-8ecb-1e490d115f59/installer/0.log" Mar 08 22:00:42.450362 master-0 kubenswrapper[7480]: I0308 22:00:42.450320 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 08 22:00:42.550305 master-0 kubenswrapper[7480]: I0308 22:00:42.549862 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c633355a-b323-4458-8ecb-1e490d115f59-kube-api-access\") pod \"c633355a-b323-4458-8ecb-1e490d115f59\" (UID: \"c633355a-b323-4458-8ecb-1e490d115f59\") " Mar 08 22:00:42.550305 master-0 kubenswrapper[7480]: I0308 22:00:42.549959 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c633355a-b323-4458-8ecb-1e490d115f59-var-lock\") pod \"c633355a-b323-4458-8ecb-1e490d115f59\" (UID: \"c633355a-b323-4458-8ecb-1e490d115f59\") " Mar 08 22:00:42.550305 master-0 kubenswrapper[7480]: I0308 22:00:42.549999 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c633355a-b323-4458-8ecb-1e490d115f59-kubelet-dir\") pod \"c633355a-b323-4458-8ecb-1e490d115f59\" (UID: \"c633355a-b323-4458-8ecb-1e490d115f59\") " Mar 08 22:00:42.550305 master-0 kubenswrapper[7480]: I0308 22:00:42.550043 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/78dc543f-66ed-4098-b5a9-699ec2ccc856-var-lock\") pod \"78dc543f-66ed-4098-b5a9-699ec2ccc856\" (UID: \"78dc543f-66ed-4098-b5a9-699ec2ccc856\") " Mar 08 22:00:42.550305 master-0 kubenswrapper[7480]: I0308 22:00:42.550055 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c633355a-b323-4458-8ecb-1e490d115f59-var-lock" (OuterVolumeSpecName: "var-lock") pod "c633355a-b323-4458-8ecb-1e490d115f59" (UID: "c633355a-b323-4458-8ecb-1e490d115f59"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:00:42.550305 master-0 kubenswrapper[7480]: I0308 22:00:42.550213 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78dc543f-66ed-4098-b5a9-699ec2ccc856-var-lock" (OuterVolumeSpecName: "var-lock") pod "78dc543f-66ed-4098-b5a9-699ec2ccc856" (UID: "78dc543f-66ed-4098-b5a9-699ec2ccc856"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:00:42.550305 master-0 kubenswrapper[7480]: I0308 22:00:42.550221 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c633355a-b323-4458-8ecb-1e490d115f59-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c633355a-b323-4458-8ecb-1e490d115f59" (UID: "c633355a-b323-4458-8ecb-1e490d115f59"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:00:42.550305 master-0 kubenswrapper[7480]: I0308 22:00:42.550284 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78dc543f-66ed-4098-b5a9-699ec2ccc856-kube-api-access\") pod \"78dc543f-66ed-4098-b5a9-699ec2ccc856\" (UID: \"78dc543f-66ed-4098-b5a9-699ec2ccc856\") " Mar 08 22:00:42.551482 master-0 kubenswrapper[7480]: I0308 22:00:42.550378 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78dc543f-66ed-4098-b5a9-699ec2ccc856-kubelet-dir\") pod \"78dc543f-66ed-4098-b5a9-699ec2ccc856\" (UID: \"78dc543f-66ed-4098-b5a9-699ec2ccc856\") " Mar 08 22:00:42.551482 master-0 kubenswrapper[7480]: I0308 22:00:42.550515 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78dc543f-66ed-4098-b5a9-699ec2ccc856-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "78dc543f-66ed-4098-b5a9-699ec2ccc856" (UID: "78dc543f-66ed-4098-b5a9-699ec2ccc856"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:00:42.551482 master-0 kubenswrapper[7480]: I0308 22:00:42.550790 7480 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c633355a-b323-4458-8ecb-1e490d115f59-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 08 22:00:42.551482 master-0 kubenswrapper[7480]: I0308 22:00:42.550805 7480 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c633355a-b323-4458-8ecb-1e490d115f59-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 22:00:42.551482 master-0 kubenswrapper[7480]: I0308 22:00:42.550814 7480 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/78dc543f-66ed-4098-b5a9-699ec2ccc856-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 08 22:00:42.551482 master-0 kubenswrapper[7480]: I0308 22:00:42.550822 7480 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78dc543f-66ed-4098-b5a9-699ec2ccc856-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 22:00:42.553479 master-0 kubenswrapper[7480]: I0308 22:00:42.553428 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78dc543f-66ed-4098-b5a9-699ec2ccc856-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "78dc543f-66ed-4098-b5a9-699ec2ccc856" (UID: "78dc543f-66ed-4098-b5a9-699ec2ccc856"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 22:00:42.553816 master-0 kubenswrapper[7480]: I0308 22:00:42.553767 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c633355a-b323-4458-8ecb-1e490d115f59-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c633355a-b323-4458-8ecb-1e490d115f59" (UID: "c633355a-b323-4458-8ecb-1e490d115f59"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 22:00:42.652199 master-0 kubenswrapper[7480]: I0308 22:00:42.651981 7480 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78dc543f-66ed-4098-b5a9-699ec2ccc856-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 08 22:00:42.652199 master-0 kubenswrapper[7480]: I0308 22:00:42.652049 7480 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c633355a-b323-4458-8ecb-1e490d115f59-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 08 22:00:43.049161 master-0 kubenswrapper[7480]: I0308 22:00:43.048956 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_c633355a-b323-4458-8ecb-1e490d115f59/installer/0.log" Mar 08 22:00:43.049421 master-0 kubenswrapper[7480]: I0308 22:00:43.049186 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 08 22:00:43.062675 master-0 kubenswrapper[7480]: I0308 22:00:43.062627 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-qv4bv_2a91f36f-900e-4b99-9be1-dfc61d8e31d9/manager/0.log" Mar 08 22:00:43.067267 master-0 kubenswrapper[7480]: I0308 22:00:43.067216 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-trhtl_dfe625a1-5ba4-491f-9ab3-5d91154961a0/approver/0.log" Mar 08 22:00:43.073033 master-0 kubenswrapper[7480]: I0308 22:00:43.072980 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_78dc543f-66ed-4098-b5a9-699ec2ccc856/installer/0.log" Mar 08 22:00:43.073341 master-0 kubenswrapper[7480]: I0308 22:00:43.073298 7480 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 08 22:00:43.082194 master-0 kubenswrapper[7480]: I0308 22:00:43.079725 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-znt8q_a21e2296-10cb-4c70-ac3e-2173d35faac4/network-operator/0.log" Mar 08 22:00:43.086445 master-0 kubenswrapper[7480]: I0308 22:00:43.086390 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-xwmmm_d9e9c931-9595-42f1-bbc2-c412286f6cd1/cluster-baremetal-operator/0.log" Mar 08 22:00:44.000346 master-0 kubenswrapper[7480]: E0308 22:00:43.999964 7480 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{installer-2-master-0.189afc9265f4a04b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-2-master-0,UID:78dc543f-66ed-4098-b5a9-699ec2ccc856,APIVersion:v1,ResourceVersion:9036,FieldPath:spec.containers{installer},},Reason:Started,Message:Started container installer,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:58:29.029666891 +0000 UTC m=+59.483287493,LastTimestamp:2026-03-08 21:58:29.029666891 +0000 UTC m=+59.483287493,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 22:00:44.223849 master-0 kubenswrapper[7480]: E0308 22:00:44.223717 7480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 08 22:00:46.635496 master-0 kubenswrapper[7480]: I0308 22:00:46.635432 7480 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-5ljhh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" start-of-body= Mar 08 22:00:46.635496 master-0 kubenswrapper[7480]: I0308 22:00:46.635494 7480 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" podUID="7e0267ba-5dd7-4e81-885f-95b27a7b42ea" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" Mar 08 22:00:46.636533 master-0 kubenswrapper[7480]: I0308 22:00:46.635560 7480 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-5ljhh container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" start-of-body= Mar 08 22:00:46.636533 master-0 kubenswrapper[7480]: I0308 22:00:46.635611 7480 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" podUID="7e0267ba-5dd7-4e81-885f-95b27a7b42ea" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" Mar 08 
22:00:47.720691 master-0 kubenswrapper[7480]: E0308 22:00:47.720571 7480 projected.go:194] Error preparing data for projected volume kube-api-access-sdfls for pod openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 08 22:00:47.720691 master-0 kubenswrapper[7480]: E0308 22:00:47.720707 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad-kube-api-access-sdfls podName:c228b17c-fd7b-4273-ac03-eac5d4a3a4ad nodeName:}" failed. No retries permitted until 2026-03-08 22:00:51.720676014 +0000 UTC m=+202.174296646 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-sdfls" (UniqueName: "kubernetes.io/projected/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad-kube-api-access-sdfls") pod "cluster-storage-operator-6fbfc8dc8f-p68k6" (UID: "c228b17c-fd7b-4273-ac03-eac5d4a3a4ad") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 08 22:00:47.823261 master-0 kubenswrapper[7480]: E0308 22:00:47.823184 7480 projected.go:194] Error preparing data for projected volume kube-api-access-zj5rx for pod openshift-marketplace/community-operators-47cmq: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 08 22:00:47.823569 master-0 kubenswrapper[7480]: E0308 22:00:47.823355 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89619d97-2c16-4e76-ba80-8b519f6a9366-kube-api-access-zj5rx podName:89619d97-2c16-4e76-ba80-8b519f6a9366 nodeName:}" failed. No retries permitted until 2026-03-08 22:00:51.823296736 +0000 UTC m=+202.276917348 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zj5rx" (UniqueName: "kubernetes.io/projected/89619d97-2c16-4e76-ba80-8b519f6a9366-kube-api-access-zj5rx") pod "community-operators-47cmq" (UID: "89619d97-2c16-4e76-ba80-8b519f6a9366") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 08 22:00:51.793725 master-0 kubenswrapper[7480]: I0308 22:00:51.793596 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdfls\" (UniqueName: \"kubernetes.io/projected/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad-kube-api-access-sdfls\") pod \"cluster-storage-operator-6fbfc8dc8f-p68k6\" (UID: \"c228b17c-fd7b-4273-ac03-eac5d4a3a4ad\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6" Mar 08 22:00:51.861322 master-0 kubenswrapper[7480]: I0308 22:00:51.861246 7480 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-nk294 container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Mar 08 22:00:51.861773 master-0 kubenswrapper[7480]: I0308 22:00:51.861726 7480 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" podUID="077643a2-ab2d-4f12-9abf-42a1c56d7aff" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Mar 08 22:00:51.895636 master-0 kubenswrapper[7480]: I0308 22:00:51.895587 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj5rx\" (UniqueName: \"kubernetes.io/projected/89619d97-2c16-4e76-ba80-8b519f6a9366-kube-api-access-zj5rx\") pod \"community-operators-47cmq\" (UID: \"89619d97-2c16-4e76-ba80-8b519f6a9366\") " pod="openshift-marketplace/community-operators-47cmq" Mar 08 22:00:52.031827 master-0 kubenswrapper[7480]: I0308 22:00:52.031697 7480 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 22:00:52.423764 master-0 kubenswrapper[7480]: E0308 22:00:52.423652 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 22:00:56.636153 master-0 kubenswrapper[7480]: I0308 22:00:56.635945 7480 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-5ljhh container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" start-of-body= Mar 08 22:00:56.636153 master-0 kubenswrapper[7480]: I0308 22:00:56.636049 7480 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" podUID="7e0267ba-5dd7-4e81-885f-95b27a7b42ea" containerName="marketplace-operator" probeResult="failure" output="Get 
\"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" Mar 08 22:00:56.637282 master-0 kubenswrapper[7480]: I0308 22:00:56.636225 7480 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-5ljhh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" start-of-body= Mar 08 22:00:56.637282 master-0 kubenswrapper[7480]: I0308 22:00:56.636410 7480 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" podUID="7e0267ba-5dd7-4e81-885f-95b27a7b42ea" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" Mar 08 22:01:01.225867 master-0 kubenswrapper[7480]: E0308 22:01:01.225497 7480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 08 22:01:01.860757 master-0 kubenswrapper[7480]: I0308 22:01:01.860602 7480 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-nk294 container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Mar 08 22:01:01.860757 master-0 kubenswrapper[7480]: I0308 22:01:01.860602 7480 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-nk294 container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Mar 08 22:01:01.860757 master-0 kubenswrapper[7480]: I0308 22:01:01.860714 7480 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" podUID="077643a2-ab2d-4f12-9abf-42a1c56d7aff" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Mar 08 22:01:01.861505 master-0 kubenswrapper[7480]: I0308 22:01:01.860805 7480 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" podUID="077643a2-ab2d-4f12-9abf-42a1c56d7aff" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" Mar 08 22:01:02.031730 master-0 kubenswrapper[7480]: I0308 22:01:02.031585 7480 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 22:01:02.424126 master-0 kubenswrapper[7480]: E0308 22:01:02.423972 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 22:01:03.240896 master-0 kubenswrapper[7480]: I0308 22:01:03.240804 7480 generic.go:334] "Generic (PLEG): container finished" podID="081acedd-4c88-461f-80f3-e80fdbadb725" containerID="aaa76f728d77c2984e519842ceb28a5273072cbb92bc05bafd70d63dc2b5a869" exitCode=0 Mar 08 22:01:06.635534 master-0 kubenswrapper[7480]: I0308 22:01:06.635472 7480 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-5ljhh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" start-of-body= Mar 08 22:01:06.636212 master-0 kubenswrapper[7480]: I0308 22:01:06.635564 7480 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" podUID="7e0267ba-5dd7-4e81-885f-95b27a7b42ea" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" Mar 08 22:01:09.031943 master-0 kubenswrapper[7480]: I0308 22:01:09.031696 7480 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 08 22:01:09.286619 master-0 kubenswrapper[7480]: I0308 22:01:09.286452 7480 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="1daa71553c686e610a1691e8e414cf629eaef6dfe3713445234ba8f4abd369b4" exitCode=1 Mar 08 22:01:09.288813 master-0 kubenswrapper[7480]: I0308 22:01:09.288765 7480 generic.go:334] "Generic (PLEG): container finished" podID="2395900a-ff6b-46ff-92c6-a8a1b5675b67" containerID="04d2e0520d46f0208b4f81730f6d539f9f11e470a035dc08dbf06867ed1a4e14" exitCode=0 Mar 08 22:01:11.860303 master-0 kubenswrapper[7480]: I0308 22:01:11.860194 7480 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-nk294 container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Mar 08 22:01:11.861196 master-0 kubenswrapper[7480]: I0308 22:01:11.860329 7480 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" podUID="077643a2-ab2d-4f12-9abf-42a1c56d7aff" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Mar 08 22:01:12.425158 master-0 kubenswrapper[7480]: E0308 22:01:12.424864 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 22:01:13.992606 master-0 kubenswrapper[7480]: I0308 22:01:13.992511 7480 patch_prober.go:28] interesting pod/controller-manager-f7df5f5b-txsrq container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get 
\"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body= Mar 08 22:01:13.993240 master-0 kubenswrapper[7480]: I0308 22:01:13.992669 7480 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" podUID="2395900a-ff6b-46ff-92c6-a8a1b5675b67" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" Mar 08 22:01:13.993240 master-0 kubenswrapper[7480]: I0308 22:01:13.992566 7480 patch_prober.go:28] interesting pod/controller-manager-f7df5f5b-txsrq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body= Mar 08 22:01:13.993240 master-0 kubenswrapper[7480]: I0308 22:01:13.992794 7480 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" podUID="2395900a-ff6b-46ff-92c6-a8a1b5675b67" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" Mar 08 22:01:14.325780 master-0 kubenswrapper[7480]: I0308 22:01:14.325714 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-955fcfb87-tn4pc_2c2c4964-678e-46ac-a500-8efc6b8255d9/machine-approver-controller/0.log" Mar 08 22:01:14.326957 master-0 kubenswrapper[7480]: I0308 22:01:14.326883 7480 generic.go:334] "Generic (PLEG): container finished" podID="2c2c4964-678e-46ac-a500-8efc6b8255d9" containerID="ad82ee8e15ddd42271a76ddef01cd2ec252bf21267dece8cdc826dbc50b614f7" exitCode=255 Mar 08 22:01:14.329517 master-0 kubenswrapper[7480]: I0308 22:01:14.329483 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-c246n_6eb502a1-db10-46ba-b698-461919464fb9/control-plane-machine-set-operator/0.log" Mar 08 22:01:14.329760 master-0 kubenswrapper[7480]: I0308 22:01:14.329728 7480 generic.go:334] "Generic (PLEG): container finished" podID="6eb502a1-db10-46ba-b698-461919464fb9" containerID="8f7cb4c1d4399f77a4bee9272b7411e3d08f666e05ff23bad71da9a5b93158e4" exitCode=1 Mar 08 22:01:15.864295 master-0 kubenswrapper[7480]: E0308 22:01:15.864170 7480 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 08 22:01:15.864987 master-0 kubenswrapper[7480]: E0308 22:01:15.864560 7480 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.029s" Mar 08 22:01:15.864987 master-0 kubenswrapper[7480]: I0308 22:01:15.864611 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k" event={"ID":"a8e00c74-fb72-4e3d-a22c-c38a4772a813","Type":"ContainerDied","Data":"334ebc87bbf952673cd1b3477f45396aaf813413e807f2bdfa8f48d87bc817d9"} Mar 08 22:01:15.864987 master-0 kubenswrapper[7480]: I0308 22:01:15.864752 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 22:01:15.864987 master-0 kubenswrapper[7480]: I0308 22:01:15.864784 
7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"2fd8e5815a9775ccbbdb270e0d056e236703cf28a68a240b97100d92c494a2aa"} Mar 08 22:01:15.866217 master-0 kubenswrapper[7480]: I0308 22:01:15.866153 7480 scope.go:117] "RemoveContainer" containerID="5946b7f2d9d566068ae07c485f39d2cd8eea56a2d551b41eae667da0ce359cfb" Mar 08 22:01:15.878350 master-0 kubenswrapper[7480]: I0308 22:01:15.878271 7480 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 08 22:01:16.351564 master-0 kubenswrapper[7480]: I0308 22:01:16.351471 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-nk294_077643a2-ab2d-4f12-9abf-42a1c56d7aff/manager/0.log" Mar 08 22:01:16.635434 master-0 kubenswrapper[7480]: I0308 22:01:16.635332 7480 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-5ljhh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" start-of-body= Mar 08 22:01:16.635434 master-0 kubenswrapper[7480]: I0308 22:01:16.635435 7480 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" podUID="7e0267ba-5dd7-4e81-885f-95b27a7b42ea" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" Mar 08 22:01:18.004628 master-0 kubenswrapper[7480]: E0308 22:01:18.004362 7480 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{cluster-samples-operator-664cb58b85-mkvtk.189afc92d3c452cc openshift-cluster-samples-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-cluster-samples-operator,Name:cluster-samples-operator-664cb58b85-mkvtk,UID:fd9abe2b-f829-4376-9abe-7da0a08770e7,APIVersion:v1,ResourceVersion:8968,FieldPath:spec.containers{cluster-samples-operator},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:263827a457b3cc707bdd050873234f5d0892a553af5cfab13f8db75de762d4cf\" in 3.79s (3.79s including waiting). 
Image size: 455416776 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 21:58:30.871995084 +0000 UTC m=+61.325615686,LastTimestamp:2026-03-08 21:58:30.871995084 +0000 UTC m=+61.325615686,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 22:01:18.227356 master-0 kubenswrapper[7480]: E0308 22:01:18.227137 7480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 08 22:01:23.993345 master-0 kubenswrapper[7480]: I0308 22:01:23.993214 7480 patch_prober.go:28] interesting pod/controller-manager-f7df5f5b-txsrq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body= Mar 08 22:01:23.993345 master-0 kubenswrapper[7480]: I0308 22:01:23.993311 7480 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" podUID="2395900a-ff6b-46ff-92c6-a8a1b5675b67" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" Mar 08 22:01:23.993345 master-0 kubenswrapper[7480]: I0308 22:01:23.993331 7480 patch_prober.go:28] interesting pod/controller-manager-f7df5f5b-txsrq container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body= Mar 08 22:01:23.994556 master-0 kubenswrapper[7480]: I0308 22:01:23.993438 7480 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" podUID="2395900a-ff6b-46ff-92c6-a8a1b5675b67" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" Mar 08 22:01:25.797114 master-0 kubenswrapper[7480]: E0308 22:01:25.797015 7480 projected.go:194] Error preparing data for projected volume kube-api-access-sdfls for pod openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 08 22:01:25.798285 master-0 kubenswrapper[7480]: E0308 22:01:25.797175 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad-kube-api-access-sdfls podName:c228b17c-fd7b-4273-ac03-eac5d4a3a4ad nodeName:}" failed. No retries permitted until 2026-03-08 22:01:33.797143192 +0000 UTC m=+244.250763794 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-sdfls" (UniqueName: "kubernetes.io/projected/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad-kube-api-access-sdfls") pod "cluster-storage-operator-6fbfc8dc8f-p68k6" (UID: "c228b17c-fd7b-4273-ac03-eac5d4a3a4ad") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 08 22:01:25.899678 master-0 kubenswrapper[7480]: E0308 22:01:25.899552 7480 projected.go:194] Error preparing data for projected volume kube-api-access-zj5rx for pod openshift-marketplace/community-operators-47cmq: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 08 22:01:25.899678 master-0 kubenswrapper[7480]: E0308 22:01:25.899692 7480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89619d97-2c16-4e76-ba80-8b519f6a9366-kube-api-access-zj5rx podName:89619d97-2c16-4e76-ba80-8b519f6a9366 nodeName:}" failed. No retries permitted until 2026-03-08 22:01:33.899661931 +0000 UTC m=+244.353282563 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-zj5rx" (UniqueName: "kubernetes.io/projected/89619d97-2c16-4e76-ba80-8b519f6a9366-kube-api-access-zj5rx") pod "community-operators-47cmq" (UID: "89619d97-2c16-4e76-ba80-8b519f6a9366") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 08 22:01:26.636175 master-0 kubenswrapper[7480]: I0308 22:01:26.636028 7480 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-5ljhh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" start-of-body= Mar 08 22:01:26.636578 master-0 kubenswrapper[7480]: I0308 22:01:26.636210 7480 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" podUID="7e0267ba-5dd7-4e81-885f-95b27a7b42ea" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" Mar 08 22:01:28.089468 master-0 kubenswrapper[7480]: I0308 22:01:28.089353 7480 status_manager.go:851] "Failed to get status for pod" podUID="d9e9c931-9595-42f1-bbc2-c412286f6cd1" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods cluster-baremetal-operator-5cdb4c5598-xwmmm)" Mar 08 22:01:33.840220 master-0 kubenswrapper[7480]: I0308 22:01:33.840063 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdfls\" (UniqueName: \"kubernetes.io/projected/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad-kube-api-access-sdfls\") pod \"cluster-storage-operator-6fbfc8dc8f-p68k6\" (UID: \"c228b17c-fd7b-4273-ac03-eac5d4a3a4ad\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6" Mar 08 22:01:33.941867 master-0 kubenswrapper[7480]: I0308 22:01:33.941748 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj5rx\" (UniqueName: \"kubernetes.io/projected/89619d97-2c16-4e76-ba80-8b519f6a9366-kube-api-access-zj5rx\") pod \"community-operators-47cmq\" (UID: \"89619d97-2c16-4e76-ba80-8b519f6a9366\") " pod="openshift-marketplace/community-operators-47cmq" Mar 08 
22:01:33.993300 master-0 kubenswrapper[7480]: I0308 22:01:33.993208 7480 patch_prober.go:28] interesting pod/controller-manager-f7df5f5b-txsrq container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body= Mar 08 22:01:33.993300 master-0 kubenswrapper[7480]: I0308 22:01:33.993298 7480 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" podUID="2395900a-ff6b-46ff-92c6-a8a1b5675b67" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" Mar 08 22:01:33.993659 master-0 kubenswrapper[7480]: I0308 22:01:33.993204 7480 patch_prober.go:28] interesting pod/controller-manager-f7df5f5b-txsrq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body= Mar 08 22:01:33.993659 master-0 kubenswrapper[7480]: I0308 22:01:33.993385 7480 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" podUID="2395900a-ff6b-46ff-92c6-a8a1b5675b67" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" Mar 08 22:01:36.636266 master-0 kubenswrapper[7480]: I0308 22:01:36.636185 7480 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-5ljhh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" start-of-body= Mar 08 22:01:36.637175 master-0 kubenswrapper[7480]: I0308 22:01:36.636296 7480 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" podUID="7e0267ba-5dd7-4e81-885f-95b27a7b42ea" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" Mar 08 22:01:39.310524 master-0 kubenswrapper[7480]: W0308 22:01:39.310428 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66e50eed_e3ac_431f_931b_7c4c848c491b.slice/crio-75ac8242dd3ac65ec334d068ab89d656dd2f236cc11b5b2166aad268d407590d WatchSource:0}: Error finding container 75ac8242dd3ac65ec334d068ab89d656dd2f236cc11b5b2166aad268d407590d: Status 404 returned error can't find the container with id 75ac8242dd3ac65ec334d068ab89d656dd2f236cc11b5b2166aad268d407590d Mar 08 22:01:39.313968 master-0 kubenswrapper[7480]: W0308 22:01:39.313769 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9fe466f_5a23_4f69_8a96_44bd5d6194f5.slice/crio-3ad163e6ddc790c3a3e14754fccc71ed19c06b28b075ab51e8c743f3e036d876 WatchSource:0}: Error finding container 3ad163e6ddc790c3a3e14754fccc71ed19c06b28b075ab51e8c743f3e036d876: Status 404 returned error can't find the container with id 3ad163e6ddc790c3a3e14754fccc71ed19c06b28b075ab51e8c743f3e036d876 Mar 08 22:01:39.316475 master-0 kubenswrapper[7480]: E0308 22:01:39.316433 7480 kubelet.go:2526] "Housekeeping took longer 
than expected" err="housekeeping took too long" expected="1s" actual="23.452s" Mar 08 22:01:39.316574 master-0 kubenswrapper[7480]: I0308 22:01:39.316491 7480 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 22:01:39.317718 master-0 kubenswrapper[7480]: I0308 22:01:39.317429 7480 scope.go:117] "RemoveContainer" containerID="c7c62eecaac8f5df8b2da98122fad8c96cfc54251fbf2aa75a9ba067018db826" Mar 08 22:01:39.343752 master-0 kubenswrapper[7480]: I0308 22:01:39.343708 7480 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 08 22:01:39.358827 master-0 kubenswrapper[7480]: I0308 22:01:39.358664 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 22:01:39.358827 master-0 kubenswrapper[7480]: I0308 22:01:39.358768 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" event={"ID":"04fb7bdb-fb5a-4187-94a3-67c8f09684ed","Type":"ContainerDied","Data":"00c5ed3578644c2cfcf3b05743187fa1a4e66cf46b816a9e956e779028d0b36b"} Mar 08 22:01:39.358827 master-0 kubenswrapper[7480]: I0308 22:01:39.358806 7480 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 22:01:39.358827 master-0 kubenswrapper[7480]: I0308 22:01:39.358826 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw" event={"ID":"971ffa86-4d52-4dc3-ba28-03d116ec3494","Type":"ContainerDied","Data":"6df6f113522fa49700aeaebc115d4f7bc3c6c606f1453723e6b3427085f53838"} Mar 08 22:01:39.358827 master-0 kubenswrapper[7480]: I0308 22:01:39.358853 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.358874 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.358990 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-trhtl" event={"ID":"dfe625a1-5ba4-491f-9ab3-5d91154961a0","Type":"ContainerDied","Data":"73a8f9d32fb6d4973561166a1225ead4683b3110d97d82f0bed60b3b5a68361b"} Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359016 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359033 7480 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359054 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359264 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359286 7480 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" event={"ID":"a21e2296-10cb-4c70-ac3e-2173d35faac4","Type":"ContainerDied","Data":"33e74f7c7bc9716ac9cd2cfb19a68cc948644c1413dc78e99dffc063fbe5f927"} Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359309 7480 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359332 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359348 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5" event={"ID":"0d851f97-b21e-432e-a4c3-dc0a8ff00e84","Type":"ContainerDied","Data":"2372290458f059a617f7c34963da0c908f74ff47559433f117b121db9f6a2646"} Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359372 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2" event={"ID":"b849f992-1020-4633-98be-75705b962fa9","Type":"ContainerDied","Data":"c086cbd7303ffe955bb2645d06594a1046769c847ec0d61ce7c507a7b2e3ee42"} Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359426 7480 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359443 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" event={"ID":"4382d186-34e4-40af-9b92-bb17ddcaa23f","Type":"ContainerDied","Data":"939aa1886a91ab1eb51e8a1cf13c57622098c7bede001e5d513bea76546b85fa"} Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359462 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" event={"ID":"d9e9c931-9595-42f1-bbc2-c412286f6cd1","Type":"ContainerDied","Data":"6edcb8198a1dd9b552f9d5577953c53700190a2b87b4307329abfdbc057033f6"} Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359486 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg" event={"ID":"f6fbc12f-3c27-4a7a-933f-43a55c960335","Type":"ContainerDied","Data":"fa11530abd773575590a911f848030e060ab34b160f17f0ed7e7dadcd26f2550"} Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359511 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"c633355a-b323-4458-8ecb-1e490d115f59","Type":"ContainerDied","Data":"28682516e11b7da515d28696337779453c2c96bd4cf9bfd8a8b3aa00aef7307b"} Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359530 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"6b2d935675401022d0d3a5a0cba88a9960e98d4a712ee887558cdfed52b47cbb"} Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359554 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" 
event={"ID":"d4d01185-e485-4697-92c2-31a044f25d82","Type":"ContainerStarted","Data":"5af2147c5b6156b079ec16c643f5bc1c46f463b8da9a0f84030507704a3988c2"} Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359571 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"1daa71553c686e610a1691e8e414cf629eaef6dfe3713445234ba8f4abd369b4"} Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359590 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0","Type":"ContainerDied","Data":"cd2c2cc51881256bddd6550f01c7b5dafc5dd571e49b29567f752b73ae5dc26c"} Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359629 7480 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd2c2cc51881256bddd6550f01c7b5dafc5dd571e49b29567f752b73ae5dc26c" Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359647 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" event={"ID":"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657","Type":"ContainerDied","Data":"a33aa7650397c6fcbc3db8208664515afb6c26ede2b1533a472f078a2d4a0ea4"} Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359666 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" event={"ID":"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657","Type":"ContainerStarted","Data":"85d980d0ad1f366d812777a55826b75d7182615f3739f55dd1c63103d4d0380c"} Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359683 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" event={"ID":"c901b468-b8e9-48f8-8050-0d54e24e2adb","Type":"ContainerDied","Data":"975d86808356450f32e152ee3c49e6ab2d8f04281755488f22f0b7506389bb2d"} Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359703 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" event={"ID":"077643a2-ab2d-4f12-9abf-42a1c56d7aff","Type":"ContainerDied","Data":"5946b7f2d9d566068ae07c485f39d2cd8eea56a2d551b41eae667da0ce359cfb"} Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359723 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" event={"ID":"2a91f36f-900e-4b99-9be1-dfc61d8e31d9","Type":"ContainerDied","Data":"69b4132a818df716de03fdd12ebf683c551197394c831d762cb2338396e793c4"} Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359742 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"9dab50817c32126dd044c242d5caff6d18e7c835ca9c9b1834ec3d5b22d1386e"} Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359759 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"a69f74d53c596ef3b12b5da1bd7cc9ace4063b0146a15cbb35be1605265e652f"} Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 
22:01:39.359775 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"17f9d25fbb914d5a3dabb518816b0dd89ec825230d824bcb65e8fd78aa107263"} Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359793 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"fce1d0b33f79a5db2c834c3acb6708a2b4525d393f2127dc9ac29f5cb4e7d10f"} Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359810 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"0434eeabd4f2ba088632cac4cb23a98f511011740d9d088962f7101dda68fb2e"} Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359831 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" event={"ID":"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed","Type":"ContainerDied","Data":"1a0df161078208a525b4d1fb6d4ca6198700570b496ec5545cc3b9587304d8a5"} Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359850 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"78dc543f-66ed-4098-b5a9-699ec2ccc856","Type":"ContainerDied","Data":"b72861ea5791b8527c79a3ba9ca252aad4949d7fe333b8f4afa8d681aa68f9d1"} Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359869 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" event={"ID":"7e0267ba-5dd7-4e81-885f-95b27a7b42ea","Type":"ContainerDied","Data":"c7c62eecaac8f5df8b2da98122fad8c96cfc54251fbf2aa75a9ba067018db826"} Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359890 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"c633355a-b323-4458-8ecb-1e490d115f59","Type":"ContainerDied","Data":"1d3dcf055543df28f3482d4eda49126cfdf056d4ebfa04ae9c5c2b3c8a2fd988"} Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359906 7480 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d3dcf055543df28f3482d4eda49126cfdf056d4ebfa04ae9c5c2b3c8a2fd988" Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359922 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k" event={"ID":"a8e00c74-fb72-4e3d-a22c-c38a4772a813","Type":"ContainerStarted","Data":"e72afc2085d471295428d0c6e91b91b2d9a4e2a26d7688d062fbd6d0d26453eb"} Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359940 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw" event={"ID":"971ffa86-4d52-4dc3-ba28-03d116ec3494","Type":"ContainerStarted","Data":"876653e3eaf25a649c1577e2202b14fc9e4231bce10bcb04ae36299b1eb1609e"} Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359962 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" 
event={"ID":"04fb7bdb-fb5a-4187-94a3-67c8f09684ed","Type":"ContainerStarted","Data":"f871c547308cba5a44237c75ff4479c8163cef5b1e2a7ff5964a521c14faec67"} Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359981 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" event={"ID":"2a91f36f-900e-4b99-9be1-dfc61d8e31d9","Type":"ContainerStarted","Data":"bbaef61fb3881295b80f5476ce40c1eeb152f4f8c17f1203f7df159cc62e41fb"} Mar 08 22:01:39.359945 master-0 kubenswrapper[7480]: I0308 22:01:39.359998 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-trhtl" event={"ID":"dfe625a1-5ba4-491f-9ab3-5d91154961a0","Type":"ContainerStarted","Data":"6c17da4a9a78c97b020ed2b0ce3db78d69c06f2bc4329c8df6a1559c497aade3"} Mar 08 22:01:39.365249 master-0 kubenswrapper[7480]: I0308 22:01:39.360017 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg" event={"ID":"f6fbc12f-3c27-4a7a-933f-43a55c960335","Type":"ContainerStarted","Data":"9e2fd1210b8809e9723f044551eadfefcc58034be22d2af001446424e236d937"} Mar 08 22:01:39.365249 master-0 kubenswrapper[7480]: I0308 22:01:39.360035 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"78dc543f-66ed-4098-b5a9-699ec2ccc856","Type":"ContainerDied","Data":"8885706fe3eb5e1a7daf09d862d9ef81922973f55e3d7589baf732cdce1cb547"} Mar 08 22:01:39.365249 master-0 kubenswrapper[7480]: I0308 22:01:39.360051 7480 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8885706fe3eb5e1a7daf09d862d9ef81922973f55e3d7589baf732cdce1cb547" Mar 08 22:01:39.365249 master-0 kubenswrapper[7480]: I0308 22:01:39.360067 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" event={"ID":"a21e2296-10cb-4c70-ac3e-2173d35faac4","Type":"ContainerStarted","Data":"d653a3f99cf80e74726e1b1340ca117861fb6803c0c0eb0b6d0a40207c074c3a"} Mar 08 22:01:39.365249 master-0 kubenswrapper[7480]: I0308 22:01:39.360118 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5" event={"ID":"0d851f97-b21e-432e-a4c3-dc0a8ff00e84","Type":"ContainerStarted","Data":"539c0747d69e37b439f9d78ced15438e6d882433e87666140b9b0adafe3b7125"} Mar 08 22:01:39.365249 master-0 kubenswrapper[7480]: I0308 22:01:39.360136 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" event={"ID":"d9e9c931-9595-42f1-bbc2-c412286f6cd1","Type":"ContainerStarted","Data":"bcc6f26fb91d7fadf6887617bfb463e5c03667a9473c0563f69e191080e03b4a"} Mar 08 22:01:39.365249 master-0 kubenswrapper[7480]: I0308 22:01:39.360156 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2" event={"ID":"b849f992-1020-4633-98be-75705b962fa9","Type":"ContainerStarted","Data":"8a52489302a5dc96ab51b546dab29cb1d4fff7df453456bacfb9302f4b296bd5"} Mar 08 22:01:39.365249 master-0 kubenswrapper[7480]: I0308 22:01:39.360173 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" 
event={"ID":"4382d186-34e4-40af-9b92-bb17ddcaa23f","Type":"ContainerStarted","Data":"41b89fabe8bcfa93d37c680741df23c997dd23bfef1e93509706508b89ba3e17"} Mar 08 22:01:39.365249 master-0 kubenswrapper[7480]: I0308 22:01:39.360190 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" event={"ID":"081acedd-4c88-461f-80f3-e80fdbadb725","Type":"ContainerDied","Data":"aaa76f728d77c2984e519842ceb28a5273072cbb92bc05bafd70d63dc2b5a869"} Mar 08 22:01:39.365249 master-0 kubenswrapper[7480]: I0308 22:01:39.360210 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"1daa71553c686e610a1691e8e414cf629eaef6dfe3713445234ba8f4abd369b4"} Mar 08 22:01:39.365249 master-0 kubenswrapper[7480]: I0308 22:01:39.360229 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" event={"ID":"2395900a-ff6b-46ff-92c6-a8a1b5675b67","Type":"ContainerDied","Data":"04d2e0520d46f0208b4f81730f6d539f9f11e470a035dc08dbf06867ed1a4e14"} Mar 08 22:01:39.365249 master-0 kubenswrapper[7480]: I0308 22:01:39.360250 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-tn4pc" event={"ID":"2c2c4964-678e-46ac-a500-8efc6b8255d9","Type":"ContainerDied","Data":"ad82ee8e15ddd42271a76ddef01cd2ec252bf21267dece8cdc826dbc50b614f7"} Mar 08 22:01:39.365249 master-0 kubenswrapper[7480]: I0308 22:01:39.360272 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-c246n" event={"ID":"6eb502a1-db10-46ba-b698-461919464fb9","Type":"ContainerDied","Data":"8f7cb4c1d4399f77a4bee9272b7411e3d08f666e05ff23bad71da9a5b93158e4"} Mar 08 22:01:39.365249 master-0 kubenswrapper[7480]: I0308 22:01:39.360293 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" event={"ID":"077643a2-ab2d-4f12-9abf-42a1c56d7aff","Type":"ContainerStarted","Data":"a4567b8a512f6afc2a33af0577da173a511b7ea0b98b67a3e548c26a0e448321"} Mar 08 22:01:39.365249 master-0 kubenswrapper[7480]: I0308 22:01:39.360909 7480 scope.go:117] "RemoveContainer" containerID="8f7cb4c1d4399f77a4bee9272b7411e3d08f666e05ff23bad71da9a5b93158e4" Mar 08 22:01:39.365249 master-0 kubenswrapper[7480]: I0308 22:01:39.363182 7480 scope.go:117] "RemoveContainer" containerID="1daa71553c686e610a1691e8e414cf629eaef6dfe3713445234ba8f4abd369b4" Mar 08 22:01:39.366797 master-0 kubenswrapper[7480]: I0308 22:01:39.366193 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 08 22:01:39.366797 master-0 kubenswrapper[7480]: I0308 22:01:39.366618 7480 scope.go:117] "RemoveContainer" containerID="6b2d935675401022d0d3a5a0cba88a9960e98d4a712ee887558cdfed52b47cbb" Mar 08 22:01:39.367787 master-0 kubenswrapper[7480]: I0308 22:01:39.367454 7480 scope.go:117] "RemoveContainer" containerID="ad82ee8e15ddd42271a76ddef01cd2ec252bf21267dece8cdc826dbc50b614f7" Mar 08 22:01:39.368699 master-0 kubenswrapper[7480]: I0308 22:01:39.368252 7480 scope.go:117] "RemoveContainer" containerID="975d86808356450f32e152ee3c49e6ab2d8f04281755488f22f0b7506389bb2d" Mar 08 22:01:39.370162 master-0 kubenswrapper[7480]: I0308 22:01:39.369994 7480 scope.go:117] "RemoveContainer" 
containerID="aaa76f728d77c2984e519842ceb28a5273072cbb92bc05bafd70d63dc2b5a869" Mar 08 22:01:39.372011 master-0 kubenswrapper[7480]: I0308 22:01:39.370623 7480 scope.go:117] "RemoveContainer" containerID="1a0df161078208a525b4d1fb6d4ca6198700570b496ec5545cc3b9587304d8a5" Mar 08 22:01:39.372011 master-0 kubenswrapper[7480]: I0308 22:01:39.370847 7480 scope.go:117] "RemoveContainer" containerID="04d2e0520d46f0208b4f81730f6d539f9f11e470a035dc08dbf06867ed1a4e14" Mar 08 22:01:39.374470 master-0 kubenswrapper[7480]: I0308 22:01:39.373196 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 22:01:39.387854 master-0 kubenswrapper[7480]: I0308 22:01:39.379106 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 08 22:01:39.387854 master-0 kubenswrapper[7480]: I0308 22:01:39.379122 7480 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="7edf22d9-1339-4983-a438-0654c2e3a105" Mar 08 22:01:39.387854 master-0 kubenswrapper[7480]: I0308 22:01:39.379169 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 22:01:39.387854 master-0 kubenswrapper[7480]: I0308 22:01:39.383143 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-8f89dfddd-fn4ck"] Mar 08 22:01:39.387854 master-0 kubenswrapper[7480]: I0308 22:01:39.384818 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg"] Mar 08 22:01:39.387854 master-0 kubenswrapper[7480]: I0308 22:01:39.386618 7480 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 08 22:01:39.387854 master-0 kubenswrapper[7480]: I0308 22:01:39.386633 7480 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="7edf22d9-1339-4983-a438-0654c2e3a105" Mar 08 22:01:39.392385 master-0 kubenswrapper[7480]: I0308 22:01:39.392334 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 08 22:01:39.394000 master-0 kubenswrapper[7480]: I0308 22:01:39.393923 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jcrxj" podStartSLOduration=172.185692352 podStartE2EDuration="3m15.393908538s" podCreationTimestamp="2026-03-08 21:58:24 +0000 UTC" firstStartedPulling="2026-03-08 21:58:27.818804111 +0000 UTC m=+58.272424723" lastFinishedPulling="2026-03-08 21:58:51.027020297 +0000 UTC m=+81.480640909" observedRunningTime="2026-03-08 22:01:39.367770032 +0000 UTC m=+249.821390654" watchObservedRunningTime="2026-03-08 22:01:39.393908538 +0000 UTC m=+249.847529150" Mar 08 22:01:39.396294 master-0 kubenswrapper[7480]: I0308 22:01:39.396127 7480 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 08 22:01:39.431683 master-0 kubenswrapper[7480]: I0308 22:01:39.431628 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 08 22:01:39.442800 master-0 kubenswrapper[7480]: I0308 22:01:39.442721 7480 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 08 22:01:39.466999 master-0 
kubenswrapper[7480]: I0308 22:01:39.466945 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz" podStartSLOduration=173.222889554 podStartE2EDuration="3m14.466924644s" podCreationTimestamp="2026-03-08 21:58:25 +0000 UTC" firstStartedPulling="2026-03-08 21:58:27.465336384 +0000 UTC m=+57.918956986" lastFinishedPulling="2026-03-08 21:58:48.709371464 +0000 UTC m=+79.162992076" observedRunningTime="2026-03-08 22:01:39.466297928 +0000 UTC m=+249.919918550" watchObservedRunningTime="2026-03-08 22:01:39.466924644 +0000 UTC m=+249.920545246" Mar 08 22:01:39.470699 master-0 kubenswrapper[7480]: I0308 22:01:39.470341 7480 scope.go:117] "RemoveContainer" containerID="a38551dfdc41a69ef8701c103d6e1d1e4d82312c574f00743d9049e10be45331" Mar 08 22:01:39.538142 master-0 kubenswrapper[7480]: I0308 22:01:39.535283 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mkvtk" podStartSLOduration=190.744866136 podStartE2EDuration="3m14.53526691s" podCreationTimestamp="2026-03-08 21:58:25 +0000 UTC" firstStartedPulling="2026-03-08 21:58:27.08158373 +0000 UTC m=+57.535204332" lastFinishedPulling="2026-03-08 21:58:30.871984504 +0000 UTC m=+61.325605106" observedRunningTime="2026-03-08 22:01:39.508611712 +0000 UTC m=+249.962232314" watchObservedRunningTime="2026-03-08 22:01:39.53526691 +0000 UTC m=+249.988887512" Mar 08 22:01:39.564663 master-0 kubenswrapper[7480]: I0308 22:01:39.564609 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" event={"ID":"66e50eed-e3ac-431f-931b-7c4c848c491b","Type":"ContainerStarted","Data":"75ac8242dd3ac65ec334d068ab89d656dd2f236cc11b5b2166aad268d407590d"} Mar 08 22:01:39.592839 master-0 kubenswrapper[7480]: I0308 22:01:39.592810 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8k5md"] Mar 08 22:01:39.592943 master-0 kubenswrapper[7480]: I0308 22:01:39.592857 7480 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8k5md"] Mar 08 22:01:39.593048 master-0 kubenswrapper[7480]: I0308 22:01:39.592995 7480 scope.go:117] "RemoveContainer" containerID="6b2d935675401022d0d3a5a0cba88a9960e98d4a712ee887558cdfed52b47cbb" Mar 08 22:01:39.593655 master-0 kubenswrapper[7480]: E0308 22:01:39.593628 7480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a38551dfdc41a69ef8701c103d6e1d1e4d82312c574f00743d9049e10be45331\": container with ID starting with a38551dfdc41a69ef8701c103d6e1d1e4d82312c574f00743d9049e10be45331 not found: ID does not exist" containerID="a38551dfdc41a69ef8701c103d6e1d1e4d82312c574f00743d9049e10be45331" Mar 08 22:01:39.593746 master-0 kubenswrapper[7480]: E0308 22:01:39.593708 7480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b2d935675401022d0d3a5a0cba88a9960e98d4a712ee887558cdfed52b47cbb\": container with ID starting with 6b2d935675401022d0d3a5a0cba88a9960e98d4a712ee887558cdfed52b47cbb not found: ID does not exist" containerID="6b2d935675401022d0d3a5a0cba88a9960e98d4a712ee887558cdfed52b47cbb" Mar 08 22:01:39.593746 master-0 kubenswrapper[7480]: I0308 22:01:39.593736 7480 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6b2d935675401022d0d3a5a0cba88a9960e98d4a712ee887558cdfed52b47cbb"} err="failed to get container status \"6b2d935675401022d0d3a5a0cba88a9960e98d4a712ee887558cdfed52b47cbb\": rpc error: code = NotFound desc = could not find container \"6b2d935675401022d0d3a5a0cba88a9960e98d4a712ee887558cdfed52b47cbb\": container with ID starting with 6b2d935675401022d0d3a5a0cba88a9960e98d4a712ee887558cdfed52b47cbb not found: ID does not exist" Mar 08 22:01:39.593837 master-0 kubenswrapper[7480]: I0308 22:01:39.593754 7480 scope.go:117] "RemoveContainer" containerID="a38551dfdc41a69ef8701c103d6e1d1e4d82312c574f00743d9049e10be45331" Mar 08 22:01:39.594217 master-0 kubenswrapper[7480]: I0308 22:01:39.594194 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a38551dfdc41a69ef8701c103d6e1d1e4d82312c574f00743d9049e10be45331"} err="failed to get container status \"a38551dfdc41a69ef8701c103d6e1d1e4d82312c574f00743d9049e10be45331\": rpc error: code = NotFound desc = could not find container \"a38551dfdc41a69ef8701c103d6e1d1e4d82312c574f00743d9049e10be45331\": container with ID starting with a38551dfdc41a69ef8701c103d6e1d1e4d82312c574f00743d9049e10be45331 not found: ID does not exist" Mar 08 22:01:39.625027 master-0 kubenswrapper[7480]: I0308 22:01:39.624990 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 22:01:39.631234 master-0 kubenswrapper[7480]: I0308 22:01:39.629960 7480 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-5ljhh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" start-of-body= Mar 08 22:01:39.631234 master-0 kubenswrapper[7480]: I0308 22:01:39.630064 7480 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" podUID="7e0267ba-5dd7-4e81-885f-95b27a7b42ea" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" Mar 08 22:01:39.633947 master-0 kubenswrapper[7480]: I0308 22:01:39.633302 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" event={"ID":"d9fe466f-5a23-4f69-8a96-44bd5d6194f5","Type":"ContainerStarted","Data":"3ad163e6ddc790c3a3e14754fccc71ed19c06b28b075ab51e8c743f3e036d876"} Mar 08 22:01:39.663435 master-0 kubenswrapper[7480]: I0308 22:01:39.663401 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 08 22:01:39.685574 master-0 kubenswrapper[7480]: I0308 22:01:39.681574 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w7p5f" podStartSLOduration=170.258512661 podStartE2EDuration="3m16.68155307s" podCreationTimestamp="2026-03-08 21:58:23 +0000 UTC" firstStartedPulling="2026-03-08 21:58:24.722391124 +0000 UTC m=+55.176011726" lastFinishedPulling="2026-03-08 21:58:51.145431483 +0000 UTC m=+81.599052135" observedRunningTime="2026-03-08 22:01:39.663809302 +0000 UTC m=+250.117429904" watchObservedRunningTime="2026-03-08 22:01:39.68155307 +0000 UTC m=+250.135173662" Mar 08 22:01:39.733246 master-0 kubenswrapper[7480]: I0308 22:01:39.731903 7480 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-0" podStartSLOduration=192.73188006 podStartE2EDuration="3m12.73188006s" podCreationTimestamp="2026-03-08 21:58:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:01:39.707804838 +0000 UTC m=+250.161425440" watchObservedRunningTime="2026-03-08 22:01:39.73188006 +0000 UTC m=+250.185500672" Mar 08 22:01:39.810427 master-0 kubenswrapper[7480]: I0308 22:01:39.810385 7480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18d5d11d-3d01-448f-b34e-55ebc772f5e8" path="/var/lib/kubelet/pods/18d5d11d-3d01-448f-b34e-55ebc772f5e8/volumes" Mar 08 22:01:39.810935 master-0 kubenswrapper[7480]: I0308 22:01:39.810901 7480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65148321-8caf-4e9c-80cc-ced8e2a8ac03" path="/var/lib/kubelet/pods/65148321-8caf-4e9c-80cc-ced8e2a8ac03/volumes" Mar 08 22:01:39.811351 master-0 kubenswrapper[7480]: I0308 22:01:39.811330 7480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a9c4d25-8230-4111-b1ad-fd6427c16488" path="/var/lib/kubelet/pods/8a9c4d25-8230-4111-b1ad-fd6427c16488/volumes" Mar 08 22:01:40.084097 master-0 kubenswrapper[7480]: I0308 22:01:40.083996 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=21.083977079 podStartE2EDuration="21.083977079s" podCreationTimestamp="2026-03-08 22:01:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:01:40.080958941 +0000 UTC m=+250.534579543" watchObservedRunningTime="2026-03-08 22:01:40.083977079 +0000 UTC m=+250.537597681" Mar 08 22:01:40.208648 master-0 kubenswrapper[7480]: I0308 22:01:40.208571 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8w7wm" podStartSLOduration=171.122553736 podStartE2EDuration="3m14.208551778s" podCreationTimestamp="2026-03-08 21:58:26 +0000 UTC" firstStartedPulling="2026-03-08 21:58:27.852160547 +0000 UTC m=+58.305781149" lastFinishedPulling="2026-03-08 21:58:50.938158579 +0000 UTC m=+81.391779191" observedRunningTime="2026-03-08 22:01:40.204594905 +0000 UTC m=+250.658215527" watchObservedRunningTime="2026-03-08 22:01:40.208551778 +0000 UTC m=+250.662172380" Mar 08 22:01:40.232355 master-0 kubenswrapper[7480]: I0308 22:01:40.232262 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" podStartSLOduration=191.04641973 podStartE2EDuration="3m14.23223805s" podCreationTimestamp="2026-03-08 21:58:26 +0000 UTC" firstStartedPulling="2026-03-08 21:58:27.710181718 +0000 UTC m=+58.163802320" lastFinishedPulling="2026-03-08 21:58:30.896000038 +0000 UTC m=+61.349620640" observedRunningTime="2026-03-08 22:01:40.227417715 +0000 UTC m=+250.681038337" watchObservedRunningTime="2026-03-08 22:01:40.23223805 +0000 UTC m=+250.685858652" Mar 08 22:01:40.641259 master-0 kubenswrapper[7480]: I0308 22:01:40.641159 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-c246n_6eb502a1-db10-46ba-b698-461919464fb9/control-plane-machine-set-operator/0.log" Mar 08 22:01:40.641259 master-0 kubenswrapper[7480]: I0308 22:01:40.641262 7480 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-c246n" event={"ID":"6eb502a1-db10-46ba-b698-461919464fb9","Type":"ContainerStarted","Data":"91654533c4587e9af46f22c13f2fb947540ddaf2d482fd744c4652dfb1a9f5a2"} Mar 08 22:01:40.646303 master-0 kubenswrapper[7480]: I0308 22:01:40.646251 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-wklhr_c901b468-b8e9-48f8-8050-0d54e24e2adb/snapshot-controller/0.log" Mar 08 22:01:40.646458 master-0 kubenswrapper[7480]: I0308 22:01:40.646393 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" event={"ID":"c901b468-b8e9-48f8-8050-0d54e24e2adb","Type":"ContainerStarted","Data":"2bcf2f4522ec1e98454f0d3a88ae01a27705138b2f5fbbd08bc581f106c16a5d"} Mar 08 22:01:40.654379 master-0 kubenswrapper[7480]: I0308 22:01:40.654319 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-955fcfb87-tn4pc_2c2c4964-678e-46ac-a500-8efc6b8255d9/machine-approver-controller/0.log" Mar 08 22:01:40.655045 master-0 kubenswrapper[7480]: I0308 22:01:40.654949 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-tn4pc" event={"ID":"2c2c4964-678e-46ac-a500-8efc6b8255d9","Type":"ContainerStarted","Data":"b20c11f43ccd9db5da682c3bedb94aac8d3ca9373e24359998912b8b40125418"} Mar 08 22:01:40.668216 master-0 kubenswrapper[7480]: I0308 22:01:40.668107 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"fb45ca0ddc44058990168eeda3f6ed62790667fe93e6e2fe4d4fe2bd256830e4"} Mar 08 22:01:40.671903 master-0 kubenswrapper[7480]: I0308 22:01:40.671842 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" event={"ID":"7e0267ba-5dd7-4e81-885f-95b27a7b42ea","Type":"ContainerStarted","Data":"852d729d09be57b6d61037e6fcf22117d96dfe2b5817fac91c49139db7eb714e"} Mar 08 22:01:40.680479 master-0 kubenswrapper[7480]: I0308 22:01:40.680293 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 22:01:40.686221 master-0 kubenswrapper[7480]: I0308 22:01:40.685280 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" event={"ID":"d9fe466f-5a23-4f69-8a96-44bd5d6194f5","Type":"ContainerStarted","Data":"48f4e5c75e011ab844af8ce6a62930e7aa5da5ffcb65fe585956c029c491a0cc"} Mar 08 22:01:40.691245 master-0 kubenswrapper[7480]: I0308 22:01:40.691180 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" event={"ID":"2395900a-ff6b-46ff-92c6-a8a1b5675b67","Type":"ContainerStarted","Data":"8f9e5a4db5da2d0ea86986733f581cc247c4f65a008f2bc00c1b7330c1022a26"} Mar 08 22:01:40.692008 master-0 kubenswrapper[7480]: I0308 22:01:40.691961 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 22:01:40.697797 master-0 kubenswrapper[7480]: I0308 22:01:40.697745 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" 
Mar 08 22:01:40.698969 master-0 kubenswrapper[7480]: I0308 22:01:40.698849 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" event={"ID":"081acedd-4c88-461f-80f3-e80fdbadb725","Type":"ContainerStarted","Data":"b17d02ce220cb7f77b9b97b6a5543cd3f92bedd3e7c85706528fb89c8a16b4f5"} Mar 08 22:01:40.709105 master-0 kubenswrapper[7480]: I0308 22:01:40.709029 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-cjdgr_84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/ingress-operator/0.log" Mar 08 22:01:40.709263 master-0 kubenswrapper[7480]: I0308 22:01:40.709167 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" event={"ID":"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed","Type":"ContainerStarted","Data":"799bcb818f10708811e14b095b41eda5205477d4badc6517a720213a0c436a29"} Mar 08 22:01:41.860573 master-0 kubenswrapper[7480]: I0308 22:01:41.860348 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 22:01:41.863306 master-0 kubenswrapper[7480]: I0308 22:01:41.862679 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 22:01:42.725960 master-0 kubenswrapper[7480]: I0308 22:01:42.725755 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" event={"ID":"66e50eed-e3ac-431f-931b-7c4c848c491b","Type":"ContainerStarted","Data":"bd2fcdaa2b69646a1f5d77c5acf0088cc640d06a976607ae2c22145452d4676a"} Mar 08 22:01:42.728718 master-0 kubenswrapper[7480]: I0308 22:01:42.728578 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" event={"ID":"d9fe466f-5a23-4f69-8a96-44bd5d6194f5","Type":"ContainerStarted","Data":"d28b9b684de2ee6afb8af986b004969105b39b6920f35f943824b725390ab335"} Mar 08 22:01:42.777856 master-0 kubenswrapper[7480]: I0308 22:01:42.777743 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" podStartSLOduration=193.084986825 podStartE2EDuration="3m15.777717753s" podCreationTimestamp="2026-03-08 21:58:27 +0000 UTC" firstStartedPulling="2026-03-08 22:01:39.710882758 +0000 UTC m=+250.164503360" lastFinishedPulling="2026-03-08 22:01:42.403613686 +0000 UTC m=+252.857234288" observedRunningTime="2026-03-08 22:01:42.772819266 +0000 UTC m=+253.226439908" watchObservedRunningTime="2026-03-08 22:01:42.777717753 +0000 UTC m=+253.231338385" Mar 08 22:01:42.778147 master-0 kubenswrapper[7480]: I0308 22:01:42.777915 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" podStartSLOduration=192.680962037 podStartE2EDuration="3m15.777908248s" podCreationTimestamp="2026-03-08 21:58:27 +0000 UTC" firstStartedPulling="2026-03-08 22:01:39.313320276 +0000 UTC m=+249.766940918" lastFinishedPulling="2026-03-08 22:01:42.410266497 +0000 UTC m=+252.863887129" observedRunningTime="2026-03-08 22:01:42.75243463 +0000 UTC m=+253.206055292" watchObservedRunningTime="2026-03-08 22:01:42.777908248 +0000 UTC m=+253.231528890" Mar 08 22:01:43.049682 master-0 kubenswrapper[7480]: I0308 22:01:43.049412 7480 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 22:01:43.388282 master-0 kubenswrapper[7480]: I0308 22:01:43.388196 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj5rx\" (UniqueName: \"kubernetes.io/projected/89619d97-2c16-4e76-ba80-8b519f6a9366-kube-api-access-zj5rx\") pod \"community-operators-47cmq\" (UID: \"89619d97-2c16-4e76-ba80-8b519f6a9366\") " pod="openshift-marketplace/community-operators-47cmq" Mar 08 22:01:43.402561 master-0 kubenswrapper[7480]: I0308 22:01:43.402034 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdfls\" (UniqueName: \"kubernetes.io/projected/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad-kube-api-access-sdfls\") pod \"cluster-storage-operator-6fbfc8dc8f-p68k6\" (UID: \"c228b17c-fd7b-4273-ac03-eac5d4a3a4ad\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6" Mar 08 22:01:43.543527 master-0 kubenswrapper[7480]: I0308 22:01:43.543471 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-lmwn6" Mar 08 22:01:43.543807 master-0 kubenswrapper[7480]: I0308 22:01:43.543545 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-c5hcb" Mar 08 22:01:43.552694 master-0 kubenswrapper[7480]: I0308 22:01:43.552631 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-47cmq" Mar 08 22:01:43.552694 master-0 kubenswrapper[7480]: I0308 22:01:43.552688 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6" Mar 08 22:01:44.007127 master-0 kubenswrapper[7480]: I0308 22:01:44.007037 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6"] Mar 08 22:01:44.075345 master-0 kubenswrapper[7480]: I0308 22:01:44.075274 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-47cmq"] Mar 08 22:01:44.083716 master-0 kubenswrapper[7480]: W0308 22:01:44.083647 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89619d97_2c16_4e76_ba80_8b519f6a9366.slice/crio-44c8fec7b12dde9268d1d824a4d97116a83214d9f8983f61af194a3fa9aecae7 WatchSource:0}: Error finding container 44c8fec7b12dde9268d1d824a4d97116a83214d9f8983f61af194a3fa9aecae7: Status 404 returned error can't find the container with id 44c8fec7b12dde9268d1d824a4d97116a83214d9f8983f61af194a3fa9aecae7 Mar 08 22:01:44.748103 master-0 kubenswrapper[7480]: I0308 22:01:44.748026 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6" event={"ID":"c228b17c-fd7b-4273-ac03-eac5d4a3a4ad","Type":"ContainerStarted","Data":"a5f486dd57f083148217b384b5e4b7e4ee2cd439fe07291b198c3cd32fbe85ef"} Mar 08 22:01:44.750421 master-0 kubenswrapper[7480]: I0308 22:01:44.750383 7480 generic.go:334] "Generic (PLEG): container finished" podID="89619d97-2c16-4e76-ba80-8b519f6a9366" containerID="b4991335150a6ed2fd7eec9480c2030f976e4351bd9e24d23f766eaa04158aae" exitCode=0 Mar 08 22:01:44.750545 master-0 kubenswrapper[7480]: I0308 22:01:44.750446 7480 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/community-operators-47cmq" event={"ID":"89619d97-2c16-4e76-ba80-8b519f6a9366","Type":"ContainerDied","Data":"b4991335150a6ed2fd7eec9480c2030f976e4351bd9e24d23f766eaa04158aae"} Mar 08 22:01:44.750660 master-0 kubenswrapper[7480]: I0308 22:01:44.750636 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47cmq" event={"ID":"89619d97-2c16-4e76-ba80-8b519f6a9366","Type":"ContainerStarted","Data":"44c8fec7b12dde9268d1d824a4d97116a83214d9f8983f61af194a3fa9aecae7"} Mar 08 22:01:45.083807 master-0 kubenswrapper[7480]: I0308 22:01:45.083395 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jcrxj"] Mar 08 22:01:45.083807 master-0 kubenswrapper[7480]: I0308 22:01:45.083696 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jcrxj" podUID="74d0aed3-8d57-472f-a48a-14ac41d6575f" containerName="registry-server" containerID="cri-o://f2c2d966fd4f6b730c5c219486608abd030c709bf059312e195d57c0e217c208" gracePeriod=2 Mar 08 22:01:45.095582 master-0 kubenswrapper[7480]: I0308 22:01:45.095535 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w7p5f"] Mar 08 22:01:45.096011 master-0 kubenswrapper[7480]: I0308 22:01:45.095985 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w7p5f" podUID="5857b3d0-0865-4ffd-bcc9-3c139c575209" containerName="registry-server" containerID="cri-o://a11dab37eb99c92f50fc6c3697f2813ade902caf47e720efac87404523695ca4" gracePeriod=2 Mar 08 22:01:45.144213 master-0 kubenswrapper[7480]: I0308 22:01:45.144166 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mg95b"] Mar 08 22:01:45.144532 master-0 kubenswrapper[7480]: E0308 22:01:45.144515 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57a34dbc-eb6d-44f5-b1aa-4762b69382ed" containerName="installer" Mar 08 22:01:45.144601 master-0 kubenswrapper[7480]: I0308 22:01:45.144591 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="57a34dbc-eb6d-44f5-b1aa-4762b69382ed" containerName="installer" Mar 08 22:01:45.144664 master-0 kubenswrapper[7480]: E0308 22:01:45.144654 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18d5d11d-3d01-448f-b34e-55ebc772f5e8" containerName="extract-content" Mar 08 22:01:45.144718 master-0 kubenswrapper[7480]: I0308 22:01:45.144708 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="18d5d11d-3d01-448f-b34e-55ebc772f5e8" containerName="extract-content" Mar 08 22:01:45.144834 master-0 kubenswrapper[7480]: E0308 22:01:45.144797 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c633355a-b323-4458-8ecb-1e490d115f59" containerName="installer" Mar 08 22:01:45.144941 master-0 kubenswrapper[7480]: I0308 22:01:45.144924 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="c633355a-b323-4458-8ecb-1e490d115f59" containerName="installer" Mar 08 22:01:45.145015 master-0 kubenswrapper[7480]: E0308 22:01:45.145004 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0" containerName="installer" Mar 08 22:01:45.145087 master-0 kubenswrapper[7480]: I0308 22:01:45.145060 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0" containerName="installer" Mar 08 22:01:45.145158 master-0 
kubenswrapper[7480]: E0308 22:01:45.145148 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65148321-8caf-4e9c-80cc-ced8e2a8ac03" containerName="installer" Mar 08 22:01:45.145213 master-0 kubenswrapper[7480]: I0308 22:01:45.145204 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="65148321-8caf-4e9c-80cc-ced8e2a8ac03" containerName="installer" Mar 08 22:01:45.145272 master-0 kubenswrapper[7480]: E0308 22:01:45.145262 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a9c4d25-8230-4111-b1ad-fd6427c16488" containerName="installer" Mar 08 22:01:45.145329 master-0 kubenswrapper[7480]: I0308 22:01:45.145320 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a9c4d25-8230-4111-b1ad-fd6427c16488" containerName="installer" Mar 08 22:01:45.145386 master-0 kubenswrapper[7480]: E0308 22:01:45.145376 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78dc543f-66ed-4098-b5a9-699ec2ccc856" containerName="installer" Mar 08 22:01:45.145439 master-0 kubenswrapper[7480]: I0308 22:01:45.145430 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="78dc543f-66ed-4098-b5a9-699ec2ccc856" containerName="installer" Mar 08 22:01:45.145579 master-0 kubenswrapper[7480]: E0308 22:01:45.145568 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18d5d11d-3d01-448f-b34e-55ebc772f5e8" containerName="extract-utilities" Mar 08 22:01:45.145634 master-0 kubenswrapper[7480]: I0308 22:01:45.145625 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="18d5d11d-3d01-448f-b34e-55ebc772f5e8" containerName="extract-utilities" Mar 08 22:01:45.145778 master-0 kubenswrapper[7480]: I0308 22:01:45.145767 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a9c4d25-8230-4111-b1ad-fd6427c16488" containerName="installer" Mar 08 22:01:45.145873 master-0 kubenswrapper[7480]: I0308 22:01:45.145858 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="78dc543f-66ed-4098-b5a9-699ec2ccc856" containerName="installer" Mar 08 22:01:45.145947 master-0 kubenswrapper[7480]: I0308 22:01:45.145937 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="65148321-8caf-4e9c-80cc-ced8e2a8ac03" containerName="installer" Mar 08 22:01:45.146016 master-0 kubenswrapper[7480]: I0308 22:01:45.145998 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="c633355a-b323-4458-8ecb-1e490d115f59" containerName="installer" Mar 08 22:01:45.146150 master-0 kubenswrapper[7480]: I0308 22:01:45.146136 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0" containerName="installer" Mar 08 22:01:45.146221 master-0 kubenswrapper[7480]: I0308 22:01:45.146212 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="18d5d11d-3d01-448f-b34e-55ebc772f5e8" containerName="extract-content" Mar 08 22:01:45.146277 master-0 kubenswrapper[7480]: I0308 22:01:45.146268 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="57a34dbc-eb6d-44f5-b1aa-4762b69382ed" containerName="installer" Mar 08 22:01:45.147171 master-0 kubenswrapper[7480]: I0308 22:01:45.147151 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mg95b" Mar 08 22:01:45.149818 master-0 kubenswrapper[7480]: I0308 22:01:45.149762 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8ctpt"] Mar 08 22:01:45.154561 master-0 kubenswrapper[7480]: I0308 22:01:45.153992 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8ctpt" Mar 08 22:01:45.157298 master-0 kubenswrapper[7480]: I0308 22:01:45.156292 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-mqlfp" Mar 08 22:01:45.193440 master-0 kubenswrapper[7480]: I0308 22:01:45.193374 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mg95b"] Mar 08 22:01:45.193440 master-0 kubenswrapper[7480]: E0308 22:01:45.193399 7480 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f2c2d966fd4f6b730c5c219486608abd030c709bf059312e195d57c0e217c208 is running failed: container process not found" containerID="f2c2d966fd4f6b730c5c219486608abd030c709bf059312e195d57c0e217c208" cmd=["grpc_health_probe","-addr=:50051"] Mar 08 22:01:45.193988 master-0 kubenswrapper[7480]: E0308 22:01:45.193822 7480 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f2c2d966fd4f6b730c5c219486608abd030c709bf059312e195d57c0e217c208 is running failed: container process not found" containerID="f2c2d966fd4f6b730c5c219486608abd030c709bf059312e195d57c0e217c208" cmd=["grpc_health_probe","-addr=:50051"] Mar 08 22:01:45.194134 master-0 kubenswrapper[7480]: E0308 22:01:45.194047 7480 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f2c2d966fd4f6b730c5c219486608abd030c709bf059312e195d57c0e217c208 is running failed: container process not found" containerID="f2c2d966fd4f6b730c5c219486608abd030c709bf059312e195d57c0e217c208" cmd=["grpc_health_probe","-addr=:50051"] Mar 08 22:01:45.194177 master-0 kubenswrapper[7480]: E0308 22:01:45.194120 7480 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f2c2d966fd4f6b730c5c219486608abd030c709bf059312e195d57c0e217c208 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-jcrxj" podUID="74d0aed3-8d57-472f-a48a-14ac41d6575f" containerName="registry-server" Mar 08 22:01:45.196107 master-0 kubenswrapper[7480]: I0308 22:01:45.196059 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8ctpt"] Mar 08 22:01:45.243603 master-0 kubenswrapper[7480]: I0308 22:01:45.243530 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k67bc\" (UniqueName: \"kubernetes.io/projected/4eec590b-c536-4b16-a664-81bc3c74eef5-kube-api-access-k67bc\") pod \"redhat-marketplace-mg95b\" (UID: \"4eec590b-c536-4b16-a664-81bc3c74eef5\") " pod="openshift-marketplace/redhat-marketplace-mg95b" Mar 08 22:01:45.243852 master-0 kubenswrapper[7480]: I0308 22:01:45.243643 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/4eec590b-c536-4b16-a664-81bc3c74eef5-utilities\") pod \"redhat-marketplace-mg95b\" (UID: \"4eec590b-c536-4b16-a664-81bc3c74eef5\") " pod="openshift-marketplace/redhat-marketplace-mg95b" Mar 08 22:01:45.243852 master-0 kubenswrapper[7480]: I0308 22:01:45.243683 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4eec590b-c536-4b16-a664-81bc3c74eef5-catalog-content\") pod \"redhat-marketplace-mg95b\" (UID: \"4eec590b-c536-4b16-a664-81bc3c74eef5\") " pod="openshift-marketplace/redhat-marketplace-mg95b" Mar 08 22:01:45.345498 master-0 kubenswrapper[7480]: I0308 22:01:45.345256 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1207b6b-0517-46eb-9953-737f2bf1040d-utilities\") pod \"certified-operators-8ctpt\" (UID: \"b1207b6b-0517-46eb-9953-737f2bf1040d\") " pod="openshift-marketplace/certified-operators-8ctpt" Mar 08 22:01:45.345498 master-0 kubenswrapper[7480]: I0308 22:01:45.345319 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k67bc\" (UniqueName: \"kubernetes.io/projected/4eec590b-c536-4b16-a664-81bc3c74eef5-kube-api-access-k67bc\") pod \"redhat-marketplace-mg95b\" (UID: \"4eec590b-c536-4b16-a664-81bc3c74eef5\") " pod="openshift-marketplace/redhat-marketplace-mg95b" Mar 08 22:01:45.345498 master-0 kubenswrapper[7480]: I0308 22:01:45.345355 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4eec590b-c536-4b16-a664-81bc3c74eef5-utilities\") pod \"redhat-marketplace-mg95b\" (UID: \"4eec590b-c536-4b16-a664-81bc3c74eef5\") " pod="openshift-marketplace/redhat-marketplace-mg95b" Mar 08 22:01:45.345498 master-0 kubenswrapper[7480]: I0308 22:01:45.345377 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4eec590b-c536-4b16-a664-81bc3c74eef5-catalog-content\") pod \"redhat-marketplace-mg95b\" (UID: \"4eec590b-c536-4b16-a664-81bc3c74eef5\") " pod="openshift-marketplace/redhat-marketplace-mg95b" Mar 08 22:01:45.345498 master-0 kubenswrapper[7480]: I0308 22:01:45.345399 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2lsl\" (UniqueName: \"kubernetes.io/projected/b1207b6b-0517-46eb-9953-737f2bf1040d-kube-api-access-d2lsl\") pod \"certified-operators-8ctpt\" (UID: \"b1207b6b-0517-46eb-9953-737f2bf1040d\") " pod="openshift-marketplace/certified-operators-8ctpt" Mar 08 22:01:45.345498 master-0 kubenswrapper[7480]: I0308 22:01:45.345417 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1207b6b-0517-46eb-9953-737f2bf1040d-catalog-content\") pod \"certified-operators-8ctpt\" (UID: \"b1207b6b-0517-46eb-9953-737f2bf1040d\") " pod="openshift-marketplace/certified-operators-8ctpt" Mar 08 22:01:45.348882 master-0 kubenswrapper[7480]: I0308 22:01:45.346120 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4eec590b-c536-4b16-a664-81bc3c74eef5-utilities\") pod \"redhat-marketplace-mg95b\" (UID: \"4eec590b-c536-4b16-a664-81bc3c74eef5\") " pod="openshift-marketplace/redhat-marketplace-mg95b" Mar 08 22:01:45.348882 
master-0 kubenswrapper[7480]: I0308 22:01:45.346502 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4eec590b-c536-4b16-a664-81bc3c74eef5-catalog-content\") pod \"redhat-marketplace-mg95b\" (UID: \"4eec590b-c536-4b16-a664-81bc3c74eef5\") " pod="openshift-marketplace/redhat-marketplace-mg95b" Mar 08 22:01:45.369128 master-0 kubenswrapper[7480]: I0308 22:01:45.369086 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k67bc\" (UniqueName: \"kubernetes.io/projected/4eec590b-c536-4b16-a664-81bc3c74eef5-kube-api-access-k67bc\") pod \"redhat-marketplace-mg95b\" (UID: \"4eec590b-c536-4b16-a664-81bc3c74eef5\") " pod="openshift-marketplace/redhat-marketplace-mg95b" Mar 08 22:01:45.440668 master-0 kubenswrapper[7480]: I0308 22:01:45.440600 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w7p5f" Mar 08 22:01:45.447719 master-0 kubenswrapper[7480]: I0308 22:01:45.447556 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2lsl\" (UniqueName: \"kubernetes.io/projected/b1207b6b-0517-46eb-9953-737f2bf1040d-kube-api-access-d2lsl\") pod \"certified-operators-8ctpt\" (UID: \"b1207b6b-0517-46eb-9953-737f2bf1040d\") " pod="openshift-marketplace/certified-operators-8ctpt" Mar 08 22:01:45.447719 master-0 kubenswrapper[7480]: I0308 22:01:45.447604 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1207b6b-0517-46eb-9953-737f2bf1040d-catalog-content\") pod \"certified-operators-8ctpt\" (UID: \"b1207b6b-0517-46eb-9953-737f2bf1040d\") " pod="openshift-marketplace/certified-operators-8ctpt" Mar 08 22:01:45.447719 master-0 kubenswrapper[7480]: I0308 22:01:45.447674 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1207b6b-0517-46eb-9953-737f2bf1040d-utilities\") pod \"certified-operators-8ctpt\" (UID: \"b1207b6b-0517-46eb-9953-737f2bf1040d\") " pod="openshift-marketplace/certified-operators-8ctpt" Mar 08 22:01:45.448448 master-0 kubenswrapper[7480]: I0308 22:01:45.448394 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1207b6b-0517-46eb-9953-737f2bf1040d-catalog-content\") pod \"certified-operators-8ctpt\" (UID: \"b1207b6b-0517-46eb-9953-737f2bf1040d\") " pod="openshift-marketplace/certified-operators-8ctpt" Mar 08 22:01:45.448448 master-0 kubenswrapper[7480]: I0308 22:01:45.448435 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1207b6b-0517-46eb-9953-737f2bf1040d-utilities\") pod \"certified-operators-8ctpt\" (UID: \"b1207b6b-0517-46eb-9953-737f2bf1040d\") " pod="openshift-marketplace/certified-operators-8ctpt" Mar 08 22:01:45.490958 master-0 kubenswrapper[7480]: I0308 22:01:45.490337 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2lsl\" (UniqueName: \"kubernetes.io/projected/b1207b6b-0517-46eb-9953-737f2bf1040d-kube-api-access-d2lsl\") pod \"certified-operators-8ctpt\" (UID: \"b1207b6b-0517-46eb-9953-737f2bf1040d\") " pod="openshift-marketplace/certified-operators-8ctpt" Mar 08 22:01:45.549620 master-0 kubenswrapper[7480]: I0308 22:01:45.549535 7480 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-7c8wr\" (UniqueName: \"kubernetes.io/projected/5857b3d0-0865-4ffd-bcc9-3c139c575209-kube-api-access-7c8wr\") pod \"5857b3d0-0865-4ffd-bcc9-3c139c575209\" (UID: \"5857b3d0-0865-4ffd-bcc9-3c139c575209\") " Mar 08 22:01:45.549620 master-0 kubenswrapper[7480]: I0308 22:01:45.549618 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5857b3d0-0865-4ffd-bcc9-3c139c575209-utilities\") pod \"5857b3d0-0865-4ffd-bcc9-3c139c575209\" (UID: \"5857b3d0-0865-4ffd-bcc9-3c139c575209\") " Mar 08 22:01:45.549989 master-0 kubenswrapper[7480]: I0308 22:01:45.549642 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5857b3d0-0865-4ffd-bcc9-3c139c575209-catalog-content\") pod \"5857b3d0-0865-4ffd-bcc9-3c139c575209\" (UID: \"5857b3d0-0865-4ffd-bcc9-3c139c575209\") " Mar 08 22:01:45.550660 master-0 kubenswrapper[7480]: I0308 22:01:45.550585 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5857b3d0-0865-4ffd-bcc9-3c139c575209-utilities" (OuterVolumeSpecName: "utilities") pod "5857b3d0-0865-4ffd-bcc9-3c139c575209" (UID: "5857b3d0-0865-4ffd-bcc9-3c139c575209"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 22:01:45.552386 master-0 kubenswrapper[7480]: I0308 22:01:45.552329 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5857b3d0-0865-4ffd-bcc9-3c139c575209-kube-api-access-7c8wr" (OuterVolumeSpecName: "kube-api-access-7c8wr") pod "5857b3d0-0865-4ffd-bcc9-3c139c575209" (UID: "5857b3d0-0865-4ffd-bcc9-3c139c575209"). InnerVolumeSpecName "kube-api-access-7c8wr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 22:01:45.581492 master-0 kubenswrapper[7480]: I0308 22:01:45.580430 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mg95b" Mar 08 22:01:45.595572 master-0 kubenswrapper[7480]: I0308 22:01:45.595469 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8ctpt" Mar 08 22:01:45.614686 master-0 kubenswrapper[7480]: I0308 22:01:45.614447 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5857b3d0-0865-4ffd-bcc9-3c139c575209-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5857b3d0-0865-4ffd-bcc9-3c139c575209" (UID: "5857b3d0-0865-4ffd-bcc9-3c139c575209"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 22:01:45.641370 master-0 kubenswrapper[7480]: I0308 22:01:45.639667 7480 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jcrxj" Mar 08 22:01:45.657863 master-0 kubenswrapper[7480]: I0308 22:01:45.655452 7480 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5857b3d0-0865-4ffd-bcc9-3c139c575209-utilities\") on node \"master-0\" DevicePath \"\"" Mar 08 22:01:45.657863 master-0 kubenswrapper[7480]: I0308 22:01:45.655490 7480 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5857b3d0-0865-4ffd-bcc9-3c139c575209-catalog-content\") on node \"master-0\" DevicePath \"\"" Mar 08 22:01:45.657863 master-0 kubenswrapper[7480]: I0308 22:01:45.655504 7480 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c8wr\" (UniqueName: \"kubernetes.io/projected/5857b3d0-0865-4ffd-bcc9-3c139c575209-kube-api-access-7c8wr\") on node \"master-0\" DevicePath \"\"" Mar 08 22:01:45.756823 master-0 kubenswrapper[7480]: I0308 22:01:45.756768 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74d0aed3-8d57-472f-a48a-14ac41d6575f-catalog-content\") pod \"74d0aed3-8d57-472f-a48a-14ac41d6575f\" (UID: \"74d0aed3-8d57-472f-a48a-14ac41d6575f\") " Mar 08 22:01:45.757048 master-0 kubenswrapper[7480]: I0308 22:01:45.756887 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfmhq\" (UniqueName: \"kubernetes.io/projected/74d0aed3-8d57-472f-a48a-14ac41d6575f-kube-api-access-mfmhq\") pod \"74d0aed3-8d57-472f-a48a-14ac41d6575f\" (UID: \"74d0aed3-8d57-472f-a48a-14ac41d6575f\") " Mar 08 22:01:45.757048 master-0 kubenswrapper[7480]: I0308 22:01:45.756993 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74d0aed3-8d57-472f-a48a-14ac41d6575f-utilities\") pod \"74d0aed3-8d57-472f-a48a-14ac41d6575f\" (UID: \"74d0aed3-8d57-472f-a48a-14ac41d6575f\") " Mar 08 22:01:45.758104 master-0 kubenswrapper[7480]: I0308 22:01:45.758050 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74d0aed3-8d57-472f-a48a-14ac41d6575f-utilities" (OuterVolumeSpecName: "utilities") pod "74d0aed3-8d57-472f-a48a-14ac41d6575f" (UID: "74d0aed3-8d57-472f-a48a-14ac41d6575f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 22:01:45.771420 master-0 kubenswrapper[7480]: I0308 22:01:45.771314 7480 generic.go:334] "Generic (PLEG): container finished" podID="5857b3d0-0865-4ffd-bcc9-3c139c575209" containerID="a11dab37eb99c92f50fc6c3697f2813ade902caf47e720efac87404523695ca4" exitCode=0 Mar 08 22:01:45.771946 master-0 kubenswrapper[7480]: I0308 22:01:45.771467 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w7p5f" event={"ID":"5857b3d0-0865-4ffd-bcc9-3c139c575209","Type":"ContainerDied","Data":"a11dab37eb99c92f50fc6c3697f2813ade902caf47e720efac87404523695ca4"} Mar 08 22:01:45.771946 master-0 kubenswrapper[7480]: I0308 22:01:45.771539 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w7p5f" event={"ID":"5857b3d0-0865-4ffd-bcc9-3c139c575209","Type":"ContainerDied","Data":"f0898c70bd4821b7587072ceaf944ff8498ad8e0f03772b1b705ce882893b76c"} Mar 08 22:01:45.771946 master-0 kubenswrapper[7480]: I0308 22:01:45.771588 7480 scope.go:117] "RemoveContainer" containerID="a11dab37eb99c92f50fc6c3697f2813ade902caf47e720efac87404523695ca4" Mar 08 22:01:45.771946 master-0 kubenswrapper[7480]: I0308 22:01:45.771914 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w7p5f" Mar 08 22:01:45.775920 master-0 kubenswrapper[7480]: I0308 22:01:45.775879 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74d0aed3-8d57-472f-a48a-14ac41d6575f-kube-api-access-mfmhq" (OuterVolumeSpecName: "kube-api-access-mfmhq") pod "74d0aed3-8d57-472f-a48a-14ac41d6575f" (UID: "74d0aed3-8d57-472f-a48a-14ac41d6575f"). InnerVolumeSpecName "kube-api-access-mfmhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 22:01:45.778119 master-0 kubenswrapper[7480]: I0308 22:01:45.777435 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47cmq" event={"ID":"89619d97-2c16-4e76-ba80-8b519f6a9366","Type":"ContainerStarted","Data":"45472acd22cf9f28bd94833449b2d75f0a3377af69685e85fac8637f3aa96e29"} Mar 08 22:01:45.785885 master-0 kubenswrapper[7480]: I0308 22:01:45.785301 7480 generic.go:334] "Generic (PLEG): container finished" podID="74d0aed3-8d57-472f-a48a-14ac41d6575f" containerID="f2c2d966fd4f6b730c5c219486608abd030c709bf059312e195d57c0e217c208" exitCode=0 Mar 08 22:01:45.785885 master-0 kubenswrapper[7480]: I0308 22:01:45.785404 7480 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jcrxj" Mar 08 22:01:45.795666 master-0 kubenswrapper[7480]: I0308 22:01:45.795392 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcrxj" event={"ID":"74d0aed3-8d57-472f-a48a-14ac41d6575f","Type":"ContainerDied","Data":"f2c2d966fd4f6b730c5c219486608abd030c709bf059312e195d57c0e217c208"} Mar 08 22:01:45.795666 master-0 kubenswrapper[7480]: I0308 22:01:45.795476 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcrxj" event={"ID":"74d0aed3-8d57-472f-a48a-14ac41d6575f","Type":"ContainerDied","Data":"dbd0502e9633a163b882da4e059fc58d1cb8c50d2d7c3ae85f65ae7cfc636b5a"} Mar 08 22:01:45.799857 master-0 kubenswrapper[7480]: I0308 22:01:45.799743 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mg95b"] Mar 08 22:01:45.803827 master-0 kubenswrapper[7480]: I0308 22:01:45.803753 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74d0aed3-8d57-472f-a48a-14ac41d6575f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74d0aed3-8d57-472f-a48a-14ac41d6575f" (UID: "74d0aed3-8d57-472f-a48a-14ac41d6575f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 22:01:45.858301 master-0 kubenswrapper[7480]: I0308 22:01:45.858180 7480 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfmhq\" (UniqueName: \"kubernetes.io/projected/74d0aed3-8d57-472f-a48a-14ac41d6575f-kube-api-access-mfmhq\") on node \"master-0\" DevicePath \"\"" Mar 08 22:01:45.858301 master-0 kubenswrapper[7480]: I0308 22:01:45.858219 7480 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74d0aed3-8d57-472f-a48a-14ac41d6575f-utilities\") on node \"master-0\" DevicePath \"\"" Mar 08 22:01:45.858301 master-0 kubenswrapper[7480]: I0308 22:01:45.858230 7480 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74d0aed3-8d57-472f-a48a-14ac41d6575f-catalog-content\") on node \"master-0\" DevicePath \"\"" Mar 08 22:01:45.903276 master-0 kubenswrapper[7480]: I0308 22:01:45.903230 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w7p5f"] Mar 08 22:01:45.905263 master-0 kubenswrapper[7480]: I0308 22:01:45.905223 7480 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w7p5f"] Mar 08 22:01:46.073485 master-0 kubenswrapper[7480]: I0308 22:01:46.073429 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8ctpt"] Mar 08 22:01:46.122029 master-0 kubenswrapper[7480]: I0308 22:01:46.121844 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jcrxj"] Mar 08 22:01:46.127346 master-0 kubenswrapper[7480]: I0308 22:01:46.127303 7480 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jcrxj"] Mar 08 22:01:46.423211 master-0 kubenswrapper[7480]: W0308 22:01:46.422636 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4eec590b_c536_4b16_a664_81bc3c74eef5.slice/crio-f9ba7cd773b843371b8f8c24e533c22a9486952b2bc08a7f9b3ad3ee69e3c968 WatchSource:0}: Error finding container 
f9ba7cd773b843371b8f8c24e533c22a9486952b2bc08a7f9b3ad3ee69e3c968: Status 404 returned error can't find the container with id f9ba7cd773b843371b8f8c24e533c22a9486952b2bc08a7f9b3ad3ee69e3c968 Mar 08 22:01:46.423498 master-0 kubenswrapper[7480]: W0308 22:01:46.423448 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1207b6b_0517_46eb_9953_737f2bf1040d.slice/crio-3bc807693a5d4854df8f60d3cc1c2f6bf083291e98e017340995c3d3b0e2bf81 WatchSource:0}: Error finding container 3bc807693a5d4854df8f60d3cc1c2f6bf083291e98e017340995c3d3b0e2bf81: Status 404 returned error can't find the container with id 3bc807693a5d4854df8f60d3cc1c2f6bf083291e98e017340995c3d3b0e2bf81 Mar 08 22:01:46.434290 master-0 kubenswrapper[7480]: I0308 22:01:46.434165 7480 scope.go:117] "RemoveContainer" containerID="4b903d630ebf81f5dffddee24eda4248e8e68dec99aefe9ac560a5898194ece7" Mar 08 22:01:46.475503 master-0 kubenswrapper[7480]: I0308 22:01:46.475449 7480 scope.go:117] "RemoveContainer" containerID="f8bc6fe2b253e5b908b6548f58ddfca4458e0821748497845d3b3791b01ac875" Mar 08 22:01:46.493325 master-0 kubenswrapper[7480]: I0308 22:01:46.493279 7480 scope.go:117] "RemoveContainer" containerID="a11dab37eb99c92f50fc6c3697f2813ade902caf47e720efac87404523695ca4" Mar 08 22:01:46.493799 master-0 kubenswrapper[7480]: E0308 22:01:46.493762 7480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a11dab37eb99c92f50fc6c3697f2813ade902caf47e720efac87404523695ca4\": container with ID starting with a11dab37eb99c92f50fc6c3697f2813ade902caf47e720efac87404523695ca4 not found: ID does not exist" containerID="a11dab37eb99c92f50fc6c3697f2813ade902caf47e720efac87404523695ca4" Mar 08 22:01:46.493845 master-0 kubenswrapper[7480]: I0308 22:01:46.493814 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a11dab37eb99c92f50fc6c3697f2813ade902caf47e720efac87404523695ca4"} err="failed to get container status \"a11dab37eb99c92f50fc6c3697f2813ade902caf47e720efac87404523695ca4\": rpc error: code = NotFound desc = could not find container \"a11dab37eb99c92f50fc6c3697f2813ade902caf47e720efac87404523695ca4\": container with ID starting with a11dab37eb99c92f50fc6c3697f2813ade902caf47e720efac87404523695ca4 not found: ID does not exist" Mar 08 22:01:46.493973 master-0 kubenswrapper[7480]: I0308 22:01:46.493850 7480 scope.go:117] "RemoveContainer" containerID="4b903d630ebf81f5dffddee24eda4248e8e68dec99aefe9ac560a5898194ece7" Mar 08 22:01:46.494302 master-0 kubenswrapper[7480]: E0308 22:01:46.494272 7480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b903d630ebf81f5dffddee24eda4248e8e68dec99aefe9ac560a5898194ece7\": container with ID starting with 4b903d630ebf81f5dffddee24eda4248e8e68dec99aefe9ac560a5898194ece7 not found: ID does not exist" containerID="4b903d630ebf81f5dffddee24eda4248e8e68dec99aefe9ac560a5898194ece7" Mar 08 22:01:46.494349 master-0 kubenswrapper[7480]: I0308 22:01:46.494297 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b903d630ebf81f5dffddee24eda4248e8e68dec99aefe9ac560a5898194ece7"} err="failed to get container status \"4b903d630ebf81f5dffddee24eda4248e8e68dec99aefe9ac560a5898194ece7\": rpc error: code = NotFound desc = could not find container \"4b903d630ebf81f5dffddee24eda4248e8e68dec99aefe9ac560a5898194ece7\": container 
with ID starting with 4b903d630ebf81f5dffddee24eda4248e8e68dec99aefe9ac560a5898194ece7 not found: ID does not exist" Mar 08 22:01:46.494349 master-0 kubenswrapper[7480]: I0308 22:01:46.494313 7480 scope.go:117] "RemoveContainer" containerID="f8bc6fe2b253e5b908b6548f58ddfca4458e0821748497845d3b3791b01ac875" Mar 08 22:01:46.494638 master-0 kubenswrapper[7480]: E0308 22:01:46.494613 7480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8bc6fe2b253e5b908b6548f58ddfca4458e0821748497845d3b3791b01ac875\": container with ID starting with f8bc6fe2b253e5b908b6548f58ddfca4458e0821748497845d3b3791b01ac875 not found: ID does not exist" containerID="f8bc6fe2b253e5b908b6548f58ddfca4458e0821748497845d3b3791b01ac875" Mar 08 22:01:46.494689 master-0 kubenswrapper[7480]: I0308 22:01:46.494632 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8bc6fe2b253e5b908b6548f58ddfca4458e0821748497845d3b3791b01ac875"} err="failed to get container status \"f8bc6fe2b253e5b908b6548f58ddfca4458e0821748497845d3b3791b01ac875\": rpc error: code = NotFound desc = could not find container \"f8bc6fe2b253e5b908b6548f58ddfca4458e0821748497845d3b3791b01ac875\": container with ID starting with f8bc6fe2b253e5b908b6548f58ddfca4458e0821748497845d3b3791b01ac875 not found: ID does not exist" Mar 08 22:01:46.494689 master-0 kubenswrapper[7480]: I0308 22:01:46.494647 7480 scope.go:117] "RemoveContainer" containerID="f2c2d966fd4f6b730c5c219486608abd030c709bf059312e195d57c0e217c208" Mar 08 22:01:46.534127 master-0 kubenswrapper[7480]: I0308 22:01:46.534065 7480 scope.go:117] "RemoveContainer" containerID="2f1e10a332985e99999de49aecb019410671810760e55209e6a0bc38dd85a256" Mar 08 22:01:46.635158 master-0 kubenswrapper[7480]: I0308 22:01:46.632559 7480 scope.go:117] "RemoveContainer" containerID="6eef060b6b4951a91ab0717958f071e01f90849baa75aac0d8ce23ea70ec907c" Mar 08 22:01:46.676478 master-0 kubenswrapper[7480]: I0308 22:01:46.675753 7480 scope.go:117] "RemoveContainer" containerID="f2c2d966fd4f6b730c5c219486608abd030c709bf059312e195d57c0e217c208" Mar 08 22:01:46.677369 master-0 kubenswrapper[7480]: E0308 22:01:46.677335 7480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2c2d966fd4f6b730c5c219486608abd030c709bf059312e195d57c0e217c208\": container with ID starting with f2c2d966fd4f6b730c5c219486608abd030c709bf059312e195d57c0e217c208 not found: ID does not exist" containerID="f2c2d966fd4f6b730c5c219486608abd030c709bf059312e195d57c0e217c208" Mar 08 22:01:46.677435 master-0 kubenswrapper[7480]: I0308 22:01:46.677370 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2c2d966fd4f6b730c5c219486608abd030c709bf059312e195d57c0e217c208"} err="failed to get container status \"f2c2d966fd4f6b730c5c219486608abd030c709bf059312e195d57c0e217c208\": rpc error: code = NotFound desc = could not find container \"f2c2d966fd4f6b730c5c219486608abd030c709bf059312e195d57c0e217c208\": container with ID starting with f2c2d966fd4f6b730c5c219486608abd030c709bf059312e195d57c0e217c208 not found: ID does not exist" Mar 08 22:01:46.677435 master-0 kubenswrapper[7480]: I0308 22:01:46.677400 7480 scope.go:117] "RemoveContainer" containerID="2f1e10a332985e99999de49aecb019410671810760e55209e6a0bc38dd85a256" Mar 08 22:01:46.677897 master-0 kubenswrapper[7480]: E0308 22:01:46.677869 7480 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"2f1e10a332985e99999de49aecb019410671810760e55209e6a0bc38dd85a256\": container with ID starting with 2f1e10a332985e99999de49aecb019410671810760e55209e6a0bc38dd85a256 not found: ID does not exist" containerID="2f1e10a332985e99999de49aecb019410671810760e55209e6a0bc38dd85a256" Mar 08 22:01:46.677897 master-0 kubenswrapper[7480]: I0308 22:01:46.677889 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f1e10a332985e99999de49aecb019410671810760e55209e6a0bc38dd85a256"} err="failed to get container status \"2f1e10a332985e99999de49aecb019410671810760e55209e6a0bc38dd85a256\": rpc error: code = NotFound desc = could not find container \"2f1e10a332985e99999de49aecb019410671810760e55209e6a0bc38dd85a256\": container with ID starting with 2f1e10a332985e99999de49aecb019410671810760e55209e6a0bc38dd85a256 not found: ID does not exist" Mar 08 22:01:46.677978 master-0 kubenswrapper[7480]: I0308 22:01:46.677903 7480 scope.go:117] "RemoveContainer" containerID="6eef060b6b4951a91ab0717958f071e01f90849baa75aac0d8ce23ea70ec907c" Mar 08 22:01:46.678267 master-0 kubenswrapper[7480]: E0308 22:01:46.678234 7480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6eef060b6b4951a91ab0717958f071e01f90849baa75aac0d8ce23ea70ec907c\": container with ID starting with 6eef060b6b4951a91ab0717958f071e01f90849baa75aac0d8ce23ea70ec907c not found: ID does not exist" containerID="6eef060b6b4951a91ab0717958f071e01f90849baa75aac0d8ce23ea70ec907c" Mar 08 22:01:46.678306 master-0 kubenswrapper[7480]: I0308 22:01:46.678264 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6eef060b6b4951a91ab0717958f071e01f90849baa75aac0d8ce23ea70ec907c"} err="failed to get container status \"6eef060b6b4951a91ab0717958f071e01f90849baa75aac0d8ce23ea70ec907c\": rpc error: code = NotFound desc = could not find container \"6eef060b6b4951a91ab0717958f071e01f90849baa75aac0d8ce23ea70ec907c\": container with ID starting with 6eef060b6b4951a91ab0717958f071e01f90849baa75aac0d8ce23ea70ec907c not found: ID does not exist" Mar 08 22:01:46.796299 master-0 kubenswrapper[7480]: I0308 22:01:46.796222 7480 generic.go:334] "Generic (PLEG): container finished" podID="b1207b6b-0517-46eb-9953-737f2bf1040d" containerID="da72619d44af489aac6baf5a28a18d7d685dca71b43deb1db98d79497a18fa19" exitCode=0 Mar 08 22:01:46.796588 master-0 kubenswrapper[7480]: I0308 22:01:46.796538 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ctpt" event={"ID":"b1207b6b-0517-46eb-9953-737f2bf1040d","Type":"ContainerDied","Data":"da72619d44af489aac6baf5a28a18d7d685dca71b43deb1db98d79497a18fa19"} Mar 08 22:01:46.796653 master-0 kubenswrapper[7480]: I0308 22:01:46.796597 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ctpt" event={"ID":"b1207b6b-0517-46eb-9953-737f2bf1040d","Type":"ContainerStarted","Data":"3bc807693a5d4854df8f60d3cc1c2f6bf083291e98e017340995c3d3b0e2bf81"} Mar 08 22:01:46.799139 master-0 kubenswrapper[7480]: I0308 22:01:46.798966 7480 generic.go:334] "Generic (PLEG): container finished" podID="4eec590b-c536-4b16-a664-81bc3c74eef5" containerID="4562b61799ee566a79cea44db886dae16855feb38419004f25ad733f55567059" exitCode=0 Mar 08 22:01:46.799139 master-0 kubenswrapper[7480]: I0308 22:01:46.799041 7480 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-mg95b" event={"ID":"4eec590b-c536-4b16-a664-81bc3c74eef5","Type":"ContainerDied","Data":"4562b61799ee566a79cea44db886dae16855feb38419004f25ad733f55567059"} Mar 08 22:01:46.799139 master-0 kubenswrapper[7480]: I0308 22:01:46.799089 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mg95b" event={"ID":"4eec590b-c536-4b16-a664-81bc3c74eef5","Type":"ContainerStarted","Data":"f9ba7cd773b843371b8f8c24e533c22a9486952b2bc08a7f9b3ad3ee69e3c968"} Mar 08 22:01:46.805420 master-0 kubenswrapper[7480]: I0308 22:01:46.805318 7480 generic.go:334] "Generic (PLEG): container finished" podID="89619d97-2c16-4e76-ba80-8b519f6a9366" containerID="45472acd22cf9f28bd94833449b2d75f0a3377af69685e85fac8637f3aa96e29" exitCode=0 Mar 08 22:01:46.805576 master-0 kubenswrapper[7480]: I0308 22:01:46.805408 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47cmq" event={"ID":"89619d97-2c16-4e76-ba80-8b519f6a9366","Type":"ContainerDied","Data":"45472acd22cf9f28bd94833449b2d75f0a3377af69685e85fac8637f3aa96e29"} Mar 08 22:01:47.803129 master-0 kubenswrapper[7480]: I0308 22:01:47.802989 7480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5857b3d0-0865-4ffd-bcc9-3c139c575209" path="/var/lib/kubelet/pods/5857b3d0-0865-4ffd-bcc9-3c139c575209/volumes" Mar 08 22:01:47.804973 master-0 kubenswrapper[7480]: I0308 22:01:47.804952 7480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74d0aed3-8d57-472f-a48a-14ac41d6575f" path="/var/lib/kubelet/pods/74d0aed3-8d57-472f-a48a-14ac41d6575f/volumes" Mar 08 22:01:47.817761 master-0 kubenswrapper[7480]: I0308 22:01:47.817690 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47cmq" event={"ID":"89619d97-2c16-4e76-ba80-8b519f6a9366","Type":"ContainerStarted","Data":"8216fde810a532dbe5b20008442fb45b7d08d72c9153e2e3074fd8899261a6e8"} Mar 08 22:01:47.820425 master-0 kubenswrapper[7480]: I0308 22:01:47.820393 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6" event={"ID":"c228b17c-fd7b-4273-ac03-eac5d4a3a4ad","Type":"ContainerStarted","Data":"9d57fc4d1e08b9fa4f826dec76d98ab4964d370b21a4f1f3de9ac2217b28ef10"} Mar 08 22:01:47.823998 master-0 kubenswrapper[7480]: I0308 22:01:47.823943 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ctpt" event={"ID":"b1207b6b-0517-46eb-9953-737f2bf1040d","Type":"ContainerStarted","Data":"d9ffb5341e8b8d84c9e35bd2c9065a3beacd71fe2f5c3020b9ea1e20dc28e517"} Mar 08 22:01:47.829012 master-0 kubenswrapper[7480]: I0308 22:01:47.828961 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mg95b" event={"ID":"4eec590b-c536-4b16-a664-81bc3c74eef5","Type":"ContainerStarted","Data":"cf1d608cd8e4a27484068f303828c57cd8c70b10159e81ee0191eb215e9cb4eb"} Mar 08 22:01:47.851680 master-0 kubenswrapper[7480]: I0308 22:01:47.851456 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-47cmq" podStartSLOduration=198.373942028 podStartE2EDuration="3m20.851435086s" podCreationTimestamp="2026-03-08 21:58:27 +0000 UTC" firstStartedPulling="2026-03-08 22:01:44.752326946 +0000 UTC m=+255.205947568" lastFinishedPulling="2026-03-08 22:01:47.229820014 +0000 UTC m=+257.683440626" 
observedRunningTime="2026-03-08 22:01:47.849748822 +0000 UTC m=+258.303369424" watchObservedRunningTime="2026-03-08 22:01:47.851435086 +0000 UTC m=+258.305055688" Mar 08 22:01:47.939023 master-0 kubenswrapper[7480]: I0308 22:01:47.938806 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6" podStartSLOduration=198.456254676 podStartE2EDuration="3m20.938778203s" podCreationTimestamp="2026-03-08 21:58:27 +0000 UTC" firstStartedPulling="2026-03-08 22:01:44.017654592 +0000 UTC m=+254.471275224" lastFinishedPulling="2026-03-08 22:01:46.500178159 +0000 UTC m=+256.953798751" observedRunningTime="2026-03-08 22:01:47.933584228 +0000 UTC m=+258.387204860" watchObservedRunningTime="2026-03-08 22:01:47.938778203 +0000 UTC m=+258.392398845" Mar 08 22:01:48.840568 master-0 kubenswrapper[7480]: I0308 22:01:48.840501 7480 generic.go:334] "Generic (PLEG): container finished" podID="b1207b6b-0517-46eb-9953-737f2bf1040d" containerID="d9ffb5341e8b8d84c9e35bd2c9065a3beacd71fe2f5c3020b9ea1e20dc28e517" exitCode=0 Mar 08 22:01:48.841730 master-0 kubenswrapper[7480]: I0308 22:01:48.840600 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ctpt" event={"ID":"b1207b6b-0517-46eb-9953-737f2bf1040d","Type":"ContainerDied","Data":"d9ffb5341e8b8d84c9e35bd2c9065a3beacd71fe2f5c3020b9ea1e20dc28e517"} Mar 08 22:01:48.845004 master-0 kubenswrapper[7480]: I0308 22:01:48.844909 7480 generic.go:334] "Generic (PLEG): container finished" podID="4eec590b-c536-4b16-a664-81bc3c74eef5" containerID="cf1d608cd8e4a27484068f303828c57cd8c70b10159e81ee0191eb215e9cb4eb" exitCode=0 Mar 08 22:01:48.845373 master-0 kubenswrapper[7480]: I0308 22:01:48.845225 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mg95b" event={"ID":"4eec590b-c536-4b16-a664-81bc3c74eef5","Type":"ContainerDied","Data":"cf1d608cd8e4a27484068f303828c57cd8c70b10159e81ee0191eb215e9cb4eb"} Mar 08 22:01:49.031145 master-0 kubenswrapper[7480]: I0308 22:01:49.031019 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 22:01:49.038856 master-0 kubenswrapper[7480]: I0308 22:01:49.038791 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 22:01:49.862934 master-0 kubenswrapper[7480]: I0308 22:01:49.862863 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ctpt" event={"ID":"b1207b6b-0517-46eb-9953-737f2bf1040d","Type":"ContainerStarted","Data":"e95b6b2af3d8666d9ed99fb1c58eb920d15415a3e67c3b59c97608b0cd789d62"} Mar 08 22:01:49.869324 master-0 kubenswrapper[7480]: I0308 22:01:49.869234 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mg95b" event={"ID":"4eec590b-c536-4b16-a664-81bc3c74eef5","Type":"ContainerStarted","Data":"4ef317f319328b940bdd7b199470ed552b6c6819f550cb5e444b775b8545e6b6"} Mar 08 22:01:49.875734 master-0 kubenswrapper[7480]: I0308 22:01:49.875671 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 22:01:49.893379 master-0 kubenswrapper[7480]: I0308 22:01:49.893268 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8ctpt" 
podStartSLOduration=2.432691156 podStartE2EDuration="4.893224145s" podCreationTimestamp="2026-03-08 22:01:45 +0000 UTC" firstStartedPulling="2026-03-08 22:01:46.798512809 +0000 UTC m=+257.252133421" lastFinishedPulling="2026-03-08 22:01:49.259045778 +0000 UTC m=+259.712666410" observedRunningTime="2026-03-08 22:01:49.890651979 +0000 UTC m=+260.344272581" watchObservedRunningTime="2026-03-08 22:01:49.893224145 +0000 UTC m=+260.346844747" Mar 08 22:01:49.947177 master-0 kubenswrapper[7480]: I0308 22:01:49.947041 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mg95b" podStartSLOduration=2.472805513 podStartE2EDuration="4.947013415s" podCreationTimestamp="2026-03-08 22:01:45 +0000 UTC" firstStartedPulling="2026-03-08 22:01:46.801365803 +0000 UTC m=+257.254986435" lastFinishedPulling="2026-03-08 22:01:49.275573725 +0000 UTC m=+259.729194337" observedRunningTime="2026-03-08 22:01:49.918896688 +0000 UTC m=+260.372517330" watchObservedRunningTime="2026-03-08 22:01:49.947013415 +0000 UTC m=+260.400634057" Mar 08 22:01:53.553791 master-0 kubenswrapper[7480]: I0308 22:01:53.553685 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-47cmq" Mar 08 22:01:53.554973 master-0 kubenswrapper[7480]: I0308 22:01:53.553812 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-47cmq" Mar 08 22:01:53.612978 master-0 kubenswrapper[7480]: I0308 22:01:53.612884 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-47cmq" Mar 08 22:01:53.956609 master-0 kubenswrapper[7480]: I0308 22:01:53.956536 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-47cmq" Mar 08 22:01:55.582010 master-0 kubenswrapper[7480]: I0308 22:01:55.581307 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mg95b" Mar 08 22:01:55.582010 master-0 kubenswrapper[7480]: I0308 22:01:55.581385 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mg95b" Mar 08 22:01:55.596486 master-0 kubenswrapper[7480]: I0308 22:01:55.596420 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8ctpt" Mar 08 22:01:55.596608 master-0 kubenswrapper[7480]: I0308 22:01:55.596497 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8ctpt" Mar 08 22:01:55.648218 master-0 kubenswrapper[7480]: I0308 22:01:55.648171 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mg95b" Mar 08 22:01:55.648435 master-0 kubenswrapper[7480]: I0308 22:01:55.648271 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8ctpt" Mar 08 22:01:55.968065 master-0 kubenswrapper[7480]: I0308 22:01:55.967900 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mg95b" Mar 08 22:01:55.977823 master-0 kubenswrapper[7480]: I0308 22:01:55.977779 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8ctpt" Mar 08 22:02:01.602950 master-0 
kubenswrapper[7480]: I0308 22:02:01.602898 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-tn4pc"] Mar 08 22:02:01.603581 master-0 kubenswrapper[7480]: I0308 22:02:01.603168 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-tn4pc" podUID="2c2c4964-678e-46ac-a500-8efc6b8255d9" containerName="machine-approver-controller" containerID="cri-o://b20c11f43ccd9db5da682c3bedb94aac8d3ca9373e24359998912b8b40125418" gracePeriod=30 Mar 08 22:02:01.603581 master-0 kubenswrapper[7480]: I0308 22:02:01.603307 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-tn4pc" podUID="2c2c4964-678e-46ac-a500-8efc6b8255d9" containerName="kube-rbac-proxy" containerID="cri-o://e1437f5db69404d802631f01a40eb772816aa23d22b78b82add335d21a0b77ce" gracePeriod=30 Mar 08 22:02:01.648207 master-0 kubenswrapper[7480]: I0308 22:02:01.648144 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf"] Mar 08 22:02:01.648456 master-0 kubenswrapper[7480]: E0308 22:02:01.648447 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5857b3d0-0865-4ffd-bcc9-3c139c575209" containerName="registry-server" Mar 08 22:02:01.648491 master-0 kubenswrapper[7480]: I0308 22:02:01.648462 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="5857b3d0-0865-4ffd-bcc9-3c139c575209" containerName="registry-server" Mar 08 22:02:01.648491 master-0 kubenswrapper[7480]: E0308 22:02:01.648485 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74d0aed3-8d57-472f-a48a-14ac41d6575f" containerName="extract-utilities" Mar 08 22:02:01.648547 master-0 kubenswrapper[7480]: I0308 22:02:01.648494 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="74d0aed3-8d57-472f-a48a-14ac41d6575f" containerName="extract-utilities" Mar 08 22:02:01.648547 master-0 kubenswrapper[7480]: E0308 22:02:01.648506 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74d0aed3-8d57-472f-a48a-14ac41d6575f" containerName="extract-content" Mar 08 22:02:01.648547 master-0 kubenswrapper[7480]: I0308 22:02:01.648513 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="74d0aed3-8d57-472f-a48a-14ac41d6575f" containerName="extract-content" Mar 08 22:02:01.648547 master-0 kubenswrapper[7480]: E0308 22:02:01.648528 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5857b3d0-0865-4ffd-bcc9-3c139c575209" containerName="extract-utilities" Mar 08 22:02:01.648547 master-0 kubenswrapper[7480]: I0308 22:02:01.648534 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="5857b3d0-0865-4ffd-bcc9-3c139c575209" containerName="extract-utilities" Mar 08 22:02:01.648547 master-0 kubenswrapper[7480]: E0308 22:02:01.648543 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5857b3d0-0865-4ffd-bcc9-3c139c575209" containerName="extract-content" Mar 08 22:02:01.648547 master-0 kubenswrapper[7480]: I0308 22:02:01.648549 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="5857b3d0-0865-4ffd-bcc9-3c139c575209" containerName="extract-content" Mar 08 22:02:01.648727 master-0 kubenswrapper[7480]: E0308 22:02:01.648555 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74d0aed3-8d57-472f-a48a-14ac41d6575f" containerName="registry-server" Mar 08 22:02:01.648727 
master-0 kubenswrapper[7480]: I0308 22:02:01.648561 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="74d0aed3-8d57-472f-a48a-14ac41d6575f" containerName="registry-server" Mar 08 22:02:01.648727 master-0 kubenswrapper[7480]: I0308 22:02:01.648662 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="5857b3d0-0865-4ffd-bcc9-3c139c575209" containerName="registry-server" Mar 08 22:02:01.648727 master-0 kubenswrapper[7480]: I0308 22:02:01.648677 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="74d0aed3-8d57-472f-a48a-14ac41d6575f" containerName="registry-server" Mar 08 22:02:01.649359 master-0 kubenswrapper[7480]: I0308 22:02:01.649338 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf" Mar 08 22:02:01.660096 master-0 kubenswrapper[7480]: I0308 22:02:01.654507 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-4m8r8" Mar 08 22:02:01.660096 master-0 kubenswrapper[7480]: I0308 22:02:01.654883 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 08 22:02:01.660096 master-0 kubenswrapper[7480]: I0308 22:02:01.655049 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 08 22:02:01.660096 master-0 kubenswrapper[7480]: I0308 22:02:01.655202 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 08 22:02:01.660096 master-0 kubenswrapper[7480]: I0308 22:02:01.655338 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 08 22:02:01.660096 master-0 kubenswrapper[7480]: I0308 22:02:01.655471 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 08 22:02:01.660096 master-0 kubenswrapper[7480]: I0308 22:02:01.658324 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj"] Mar 08 22:02:01.660096 master-0 kubenswrapper[7480]: I0308 22:02:01.659060 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx"] Mar 08 22:02:01.660688 master-0 kubenswrapper[7480]: I0308 22:02:01.659177 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj" Mar 08 22:02:01.667100 master-0 kubenswrapper[7480]: I0308 22:02:01.663006 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx" Mar 08 22:02:01.674083 master-0 kubenswrapper[7480]: I0308 22:02:01.673348 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-fk6p8" Mar 08 22:02:01.674083 master-0 kubenswrapper[7480]: I0308 22:02:01.673634 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 08 22:02:01.688007 master-0 kubenswrapper[7480]: I0308 22:02:01.686531 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 08 22:02:01.688007 master-0 kubenswrapper[7480]: I0308 22:02:01.686746 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 08 22:02:01.688007 master-0 kubenswrapper[7480]: I0308 22:02:01.686872 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-wjqj5" Mar 08 22:02:01.688007 master-0 kubenswrapper[7480]: I0308 22:02:01.687030 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 08 22:02:01.688007 master-0 kubenswrapper[7480]: I0308 22:02:01.687415 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 08 22:02:01.688007 master-0 kubenswrapper[7480]: I0308 22:02:01.687538 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 08 22:02:01.688007 master-0 kubenswrapper[7480]: I0308 22:02:01.687670 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 08 22:02:01.688007 master-0 kubenswrapper[7480]: I0308 22:02:01.687846 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 08 22:02:01.695149 master-0 kubenswrapper[7480]: I0308 22:02:01.694111 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj"] Mar 08 22:02:01.695149 master-0 kubenswrapper[7480]: I0308 22:02:01.694168 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf"] Mar 08 22:02:01.759094 master-0 kubenswrapper[7480]: I0308 22:02:01.756305 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh"] Mar 08 22:02:01.759094 master-0 kubenswrapper[7480]: I0308 22:02:01.757028 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh" Mar 08 22:02:01.785274 master-0 kubenswrapper[7480]: I0308 22:02:01.783290 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh"] Mar 08 22:02:01.794968 master-0 kubenswrapper[7480]: I0308 22:02:01.792774 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 08 22:02:01.794968 master-0 kubenswrapper[7480]: I0308 22:02:01.793125 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-qdmfw" Mar 08 22:02:01.806102 master-0 kubenswrapper[7480]: I0308 22:02:01.802597 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3e38e989-41b8-4c80-99fb-8d414dda5da1-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-m7phf\" (UID: \"3e38e989-41b8-4c80-99fb-8d414dda5da1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf" Mar 08 22:02:01.806102 master-0 kubenswrapper[7480]: I0308 22:02:01.802668 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rht7d\" (UniqueName: \"kubernetes.io/projected/5ed9a4ec-9460-4e67-a372-ec6920c54832-kube-api-access-rht7d\") pod \"cluster-cloud-controller-manager-operator-559568b945-kmbpx\" (UID: \"5ed9a4ec-9460-4e67-a372-ec6920c54832\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx" Mar 08 22:02:01.806102 master-0 kubenswrapper[7480]: I0308 22:02:01.802694 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1ef14467-bb62-462d-9dec-dee43e4cc9bd-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-64gfj\" (UID: \"1ef14467-bb62-462d-9dec-dee43e4cc9bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj" Mar 08 22:02:01.806102 master-0 kubenswrapper[7480]: I0308 22:02:01.802718 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3e38e989-41b8-4c80-99fb-8d414dda5da1-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-m7phf\" (UID: \"3e38e989-41b8-4c80-99fb-8d414dda5da1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf" Mar 08 22:02:01.806102 master-0 kubenswrapper[7480]: I0308 22:02:01.802749 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5ed9a4ec-9460-4e67-a372-ec6920c54832-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-559568b945-kmbpx\" (UID: \"5ed9a4ec-9460-4e67-a372-ec6920c54832\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx" Mar 08 22:02:01.806102 master-0 kubenswrapper[7480]: I0308 22:02:01.802790 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp86m\" (UniqueName: \"kubernetes.io/projected/3e38e989-41b8-4c80-99fb-8d414dda5da1-kube-api-access-jp86m\") pod 
\"machine-config-operator-fdb5c78b5-m7phf\" (UID: \"3e38e989-41b8-4c80-99fb-8d414dda5da1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf" Mar 08 22:02:01.806102 master-0 kubenswrapper[7480]: I0308 22:02:01.802821 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tfdv\" (UniqueName: \"kubernetes.io/projected/1ef14467-bb62-462d-9dec-dee43e4cc9bd-kube-api-access-6tfdv\") pod \"machine-api-operator-84bf6db4f9-64gfj\" (UID: \"1ef14467-bb62-462d-9dec-dee43e4cc9bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj" Mar 08 22:02:01.806102 master-0 kubenswrapper[7480]: I0308 22:02:01.802839 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ef14467-bb62-462d-9dec-dee43e4cc9bd-config\") pod \"machine-api-operator-84bf6db4f9-64gfj\" (UID: \"1ef14467-bb62-462d-9dec-dee43e4cc9bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj" Mar 08 22:02:01.806102 master-0 kubenswrapper[7480]: I0308 22:02:01.802861 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1ef14467-bb62-462d-9dec-dee43e4cc9bd-images\") pod \"machine-api-operator-84bf6db4f9-64gfj\" (UID: \"1ef14467-bb62-462d-9dec-dee43e4cc9bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj" Mar 08 22:02:01.806102 master-0 kubenswrapper[7480]: I0308 22:02:01.802884 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3e38e989-41b8-4c80-99fb-8d414dda5da1-images\") pod \"machine-config-operator-fdb5c78b5-m7phf\" (UID: \"3e38e989-41b8-4c80-99fb-8d414dda5da1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf" Mar 08 22:02:01.806102 master-0 kubenswrapper[7480]: I0308 22:02:01.802906 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5ed9a4ec-9460-4e67-a372-ec6920c54832-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-559568b945-kmbpx\" (UID: \"5ed9a4ec-9460-4e67-a372-ec6920c54832\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx" Mar 08 22:02:01.806102 master-0 kubenswrapper[7480]: I0308 22:02:01.802930 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5ed9a4ec-9460-4e67-a372-ec6920c54832-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-559568b945-kmbpx\" (UID: \"5ed9a4ec-9460-4e67-a372-ec6920c54832\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx" Mar 08 22:02:01.806102 master-0 kubenswrapper[7480]: I0308 22:02:01.802951 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5ed9a4ec-9460-4e67-a372-ec6920c54832-images\") pod \"cluster-cloud-controller-manager-operator-559568b945-kmbpx\" (UID: \"5ed9a4ec-9460-4e67-a372-ec6920c54832\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx" Mar 08 22:02:01.811802 master-0 kubenswrapper[7480]: I0308 
22:02:01.811726 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-955fcfb87-tn4pc_2c2c4964-678e-46ac-a500-8efc6b8255d9/machine-approver-controller/0.log" Mar 08 22:02:01.812690 master-0 kubenswrapper[7480]: I0308 22:02:01.812590 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-tn4pc" Mar 08 22:02:01.904315 master-0 kubenswrapper[7480]: I0308 22:02:01.904182 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgnsn\" (UniqueName: \"kubernetes.io/projected/2c2c4964-678e-46ac-a500-8efc6b8255d9-kube-api-access-lgnsn\") pod \"2c2c4964-678e-46ac-a500-8efc6b8255d9\" (UID: \"2c2c4964-678e-46ac-a500-8efc6b8255d9\") " Mar 08 22:02:01.904315 master-0 kubenswrapper[7480]: I0308 22:02:01.904315 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2c2c4964-678e-46ac-a500-8efc6b8255d9-auth-proxy-config\") pod \"2c2c4964-678e-46ac-a500-8efc6b8255d9\" (UID: \"2c2c4964-678e-46ac-a500-8efc6b8255d9\") " Mar 08 22:02:01.904579 master-0 kubenswrapper[7480]: I0308 22:02:01.904343 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2c2c4964-678e-46ac-a500-8efc6b8255d9-machine-approver-tls\") pod \"2c2c4964-678e-46ac-a500-8efc6b8255d9\" (UID: \"2c2c4964-678e-46ac-a500-8efc6b8255d9\") " Mar 08 22:02:01.904579 master-0 kubenswrapper[7480]: I0308 22:02:01.904383 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c2c4964-678e-46ac-a500-8efc6b8255d9-config\") pod \"2c2c4964-678e-46ac-a500-8efc6b8255d9\" (UID: \"2c2c4964-678e-46ac-a500-8efc6b8255d9\") " Mar 08 22:02:01.904579 master-0 kubenswrapper[7480]: I0308 22:02:01.904518 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e-apiservice-cert\") pod \"packageserver-f988cd549-68kmh\" (UID: \"4e2eb05c-eaa5-4d9b-abad-c0ef6835087e\") " pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh" Mar 08 22:02:01.904579 master-0 kubenswrapper[7480]: I0308 22:02:01.904562 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp86m\" (UniqueName: \"kubernetes.io/projected/3e38e989-41b8-4c80-99fb-8d414dda5da1-kube-api-access-jp86m\") pod \"machine-config-operator-fdb5c78b5-m7phf\" (UID: \"3e38e989-41b8-4c80-99fb-8d414dda5da1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf" Mar 08 22:02:01.904694 master-0 kubenswrapper[7480]: I0308 22:02:01.904589 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tfdv\" (UniqueName: \"kubernetes.io/projected/1ef14467-bb62-462d-9dec-dee43e4cc9bd-kube-api-access-6tfdv\") pod \"machine-api-operator-84bf6db4f9-64gfj\" (UID: \"1ef14467-bb62-462d-9dec-dee43e4cc9bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj" Mar 08 22:02:01.904694 master-0 kubenswrapper[7480]: I0308 22:02:01.904609 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ef14467-bb62-462d-9dec-dee43e4cc9bd-config\") pod 
\"machine-api-operator-84bf6db4f9-64gfj\" (UID: \"1ef14467-bb62-462d-9dec-dee43e4cc9bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj" Mar 08 22:02:01.904694 master-0 kubenswrapper[7480]: I0308 22:02:01.904629 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1ef14467-bb62-462d-9dec-dee43e4cc9bd-images\") pod \"machine-api-operator-84bf6db4f9-64gfj\" (UID: \"1ef14467-bb62-462d-9dec-dee43e4cc9bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj" Mar 08 22:02:01.904694 master-0 kubenswrapper[7480]: I0308 22:02:01.904648 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e-tmpfs\") pod \"packageserver-f988cd549-68kmh\" (UID: \"4e2eb05c-eaa5-4d9b-abad-c0ef6835087e\") " pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh" Mar 08 22:02:01.904694 master-0 kubenswrapper[7480]: I0308 22:02:01.904666 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3e38e989-41b8-4c80-99fb-8d414dda5da1-images\") pod \"machine-config-operator-fdb5c78b5-m7phf\" (UID: \"3e38e989-41b8-4c80-99fb-8d414dda5da1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf" Mar 08 22:02:01.904694 master-0 kubenswrapper[7480]: I0308 22:02:01.904685 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5ed9a4ec-9460-4e67-a372-ec6920c54832-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-559568b945-kmbpx\" (UID: \"5ed9a4ec-9460-4e67-a372-ec6920c54832\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx" Mar 08 22:02:01.904863 master-0 kubenswrapper[7480]: I0308 22:02:01.904706 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5ed9a4ec-9460-4e67-a372-ec6920c54832-images\") pod \"cluster-cloud-controller-manager-operator-559568b945-kmbpx\" (UID: \"5ed9a4ec-9460-4e67-a372-ec6920c54832\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx" Mar 08 22:02:01.904863 master-0 kubenswrapper[7480]: I0308 22:02:01.904722 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5ed9a4ec-9460-4e67-a372-ec6920c54832-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-559568b945-kmbpx\" (UID: \"5ed9a4ec-9460-4e67-a372-ec6920c54832\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx" Mar 08 22:02:01.904863 master-0 kubenswrapper[7480]: I0308 22:02:01.904740 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhp8w\" (UniqueName: \"kubernetes.io/projected/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e-kube-api-access-lhp8w\") pod \"packageserver-f988cd549-68kmh\" (UID: \"4e2eb05c-eaa5-4d9b-abad-c0ef6835087e\") " pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh" Mar 08 22:02:01.904863 master-0 kubenswrapper[7480]: I0308 22:02:01.904760 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/3e38e989-41b8-4c80-99fb-8d414dda5da1-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-m7phf\" (UID: \"3e38e989-41b8-4c80-99fb-8d414dda5da1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf" Mar 08 22:02:01.904863 master-0 kubenswrapper[7480]: I0308 22:02:01.904776 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e-webhook-cert\") pod \"packageserver-f988cd549-68kmh\" (UID: \"4e2eb05c-eaa5-4d9b-abad-c0ef6835087e\") " pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh" Mar 08 22:02:01.904863 master-0 kubenswrapper[7480]: I0308 22:02:01.904796 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rht7d\" (UniqueName: \"kubernetes.io/projected/5ed9a4ec-9460-4e67-a372-ec6920c54832-kube-api-access-rht7d\") pod \"cluster-cloud-controller-manager-operator-559568b945-kmbpx\" (UID: \"5ed9a4ec-9460-4e67-a372-ec6920c54832\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx" Mar 08 22:02:01.904863 master-0 kubenswrapper[7480]: I0308 22:02:01.904816 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1ef14467-bb62-462d-9dec-dee43e4cc9bd-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-64gfj\" (UID: \"1ef14467-bb62-462d-9dec-dee43e4cc9bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj" Mar 08 22:02:01.904863 master-0 kubenswrapper[7480]: I0308 22:02:01.904834 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3e38e989-41b8-4c80-99fb-8d414dda5da1-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-m7phf\" (UID: \"3e38e989-41b8-4c80-99fb-8d414dda5da1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf" Mar 08 22:02:01.904863 master-0 kubenswrapper[7480]: I0308 22:02:01.904856 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5ed9a4ec-9460-4e67-a372-ec6920c54832-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-559568b945-kmbpx\" (UID: \"5ed9a4ec-9460-4e67-a372-ec6920c54832\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx" Mar 08 22:02:01.906592 master-0 kubenswrapper[7480]: I0308 22:02:01.906518 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c2c4964-678e-46ac-a500-8efc6b8255d9-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "2c2c4964-678e-46ac-a500-8efc6b8255d9" (UID: "2c2c4964-678e-46ac-a500-8efc6b8255d9"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:02:01.907317 master-0 kubenswrapper[7480]: I0308 22:02:01.907262 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c2c4964-678e-46ac-a500-8efc6b8255d9-config" (OuterVolumeSpecName: "config") pod "2c2c4964-678e-46ac-a500-8efc6b8255d9" (UID: "2c2c4964-678e-46ac-a500-8efc6b8255d9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:02:01.907401 master-0 kubenswrapper[7480]: I0308 22:02:01.907375 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3e38e989-41b8-4c80-99fb-8d414dda5da1-images\") pod \"machine-config-operator-fdb5c78b5-m7phf\" (UID: \"3e38e989-41b8-4c80-99fb-8d414dda5da1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf" Mar 08 22:02:01.907628 master-0 kubenswrapper[7480]: I0308 22:02:01.907572 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1ef14467-bb62-462d-9dec-dee43e4cc9bd-images\") pod \"machine-api-operator-84bf6db4f9-64gfj\" (UID: \"1ef14467-bb62-462d-9dec-dee43e4cc9bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj" Mar 08 22:02:01.907843 master-0 kubenswrapper[7480]: I0308 22:02:01.907790 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5ed9a4ec-9460-4e67-a372-ec6920c54832-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-559568b945-kmbpx\" (UID: \"5ed9a4ec-9460-4e67-a372-ec6920c54832\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx" Mar 08 22:02:01.907896 master-0 kubenswrapper[7480]: I0308 22:02:01.907832 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5ed9a4ec-9460-4e67-a372-ec6920c54832-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-559568b945-kmbpx\" (UID: \"5ed9a4ec-9460-4e67-a372-ec6920c54832\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx" Mar 08 22:02:01.908225 master-0 kubenswrapper[7480]: I0308 22:02:01.908207 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5ed9a4ec-9460-4e67-a372-ec6920c54832-images\") pod \"cluster-cloud-controller-manager-operator-559568b945-kmbpx\" (UID: \"5ed9a4ec-9460-4e67-a372-ec6920c54832\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx" Mar 08 22:02:01.908856 master-0 kubenswrapper[7480]: I0308 22:02:01.908838 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3e38e989-41b8-4c80-99fb-8d414dda5da1-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-m7phf\" (UID: \"3e38e989-41b8-4c80-99fb-8d414dda5da1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf" Mar 08 22:02:01.908943 master-0 kubenswrapper[7480]: I0308 22:02:01.908851 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ef14467-bb62-462d-9dec-dee43e4cc9bd-config\") pod \"machine-api-operator-84bf6db4f9-64gfj\" (UID: \"1ef14467-bb62-462d-9dec-dee43e4cc9bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj" Mar 08 22:02:01.910698 master-0 kubenswrapper[7480]: I0308 22:02:01.910600 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5ed9a4ec-9460-4e67-a372-ec6920c54832-cloud-controller-manager-operator-tls\") pod 
\"cluster-cloud-controller-manager-operator-559568b945-kmbpx\" (UID: \"5ed9a4ec-9460-4e67-a372-ec6920c54832\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx" Mar 08 22:02:01.911569 master-0 kubenswrapper[7480]: I0308 22:02:01.911526 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c2c4964-678e-46ac-a500-8efc6b8255d9-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "2c2c4964-678e-46ac-a500-8efc6b8255d9" (UID: "2c2c4964-678e-46ac-a500-8efc6b8255d9"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 22:02:01.912484 master-0 kubenswrapper[7480]: I0308 22:02:01.912448 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3e38e989-41b8-4c80-99fb-8d414dda5da1-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-m7phf\" (UID: \"3e38e989-41b8-4c80-99fb-8d414dda5da1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf" Mar 08 22:02:01.913009 master-0 kubenswrapper[7480]: I0308 22:02:01.912970 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1ef14467-bb62-462d-9dec-dee43e4cc9bd-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-64gfj\" (UID: \"1ef14467-bb62-462d-9dec-dee43e4cc9bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj" Mar 08 22:02:01.923553 master-0 kubenswrapper[7480]: I0308 22:02:01.923501 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c2c4964-678e-46ac-a500-8efc6b8255d9-kube-api-access-lgnsn" (OuterVolumeSpecName: "kube-api-access-lgnsn") pod "2c2c4964-678e-46ac-a500-8efc6b8255d9" (UID: "2c2c4964-678e-46ac-a500-8efc6b8255d9"). InnerVolumeSpecName "kube-api-access-lgnsn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 22:02:01.927032 master-0 kubenswrapper[7480]: I0308 22:02:01.926966 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tfdv\" (UniqueName: \"kubernetes.io/projected/1ef14467-bb62-462d-9dec-dee43e4cc9bd-kube-api-access-6tfdv\") pod \"machine-api-operator-84bf6db4f9-64gfj\" (UID: \"1ef14467-bb62-462d-9dec-dee43e4cc9bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj" Mar 08 22:02:01.928227 master-0 kubenswrapper[7480]: I0308 22:02:01.928141 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rht7d\" (UniqueName: \"kubernetes.io/projected/5ed9a4ec-9460-4e67-a372-ec6920c54832-kube-api-access-rht7d\") pod \"cluster-cloud-controller-manager-operator-559568b945-kmbpx\" (UID: \"5ed9a4ec-9460-4e67-a372-ec6920c54832\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx" Mar 08 22:02:01.928481 master-0 kubenswrapper[7480]: I0308 22:02:01.928447 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp86m\" (UniqueName: \"kubernetes.io/projected/3e38e989-41b8-4c80-99fb-8d414dda5da1-kube-api-access-jp86m\") pod \"machine-config-operator-fdb5c78b5-m7phf\" (UID: \"3e38e989-41b8-4c80-99fb-8d414dda5da1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf" Mar 08 22:02:01.953321 master-0 kubenswrapper[7480]: I0308 22:02:01.953241 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-955fcfb87-tn4pc_2c2c4964-678e-46ac-a500-8efc6b8255d9/machine-approver-controller/0.log" Mar 08 22:02:01.954580 master-0 kubenswrapper[7480]: I0308 22:02:01.954542 7480 generic.go:334] "Generic (PLEG): container finished" podID="2c2c4964-678e-46ac-a500-8efc6b8255d9" containerID="b20c11f43ccd9db5da682c3bedb94aac8d3ca9373e24359998912b8b40125418" exitCode=0 Mar 08 22:02:01.954580 master-0 kubenswrapper[7480]: I0308 22:02:01.954581 7480 generic.go:334] "Generic (PLEG): container finished" podID="2c2c4964-678e-46ac-a500-8efc6b8255d9" containerID="e1437f5db69404d802631f01a40eb772816aa23d22b78b82add335d21a0b77ce" exitCode=0 Mar 08 22:02:01.954678 master-0 kubenswrapper[7480]: I0308 22:02:01.954609 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-tn4pc" event={"ID":"2c2c4964-678e-46ac-a500-8efc6b8255d9","Type":"ContainerDied","Data":"b20c11f43ccd9db5da682c3bedb94aac8d3ca9373e24359998912b8b40125418"} Mar 08 22:02:01.955155 master-0 kubenswrapper[7480]: I0308 22:02:01.955137 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-tn4pc" event={"ID":"2c2c4964-678e-46ac-a500-8efc6b8255d9","Type":"ContainerDied","Data":"e1437f5db69404d802631f01a40eb772816aa23d22b78b82add335d21a0b77ce"} Mar 08 22:02:01.955155 master-0 kubenswrapper[7480]: I0308 22:02:01.955158 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-tn4pc" event={"ID":"2c2c4964-678e-46ac-a500-8efc6b8255d9","Type":"ContainerDied","Data":"627ace5b53c8effa9e246bfd6af99dbd08bf8878208542c3b1c00eb2182540ad"} Mar 08 22:02:01.955262 master-0 kubenswrapper[7480]: I0308 22:02:01.955181 7480 scope.go:117] "RemoveContainer" containerID="b20c11f43ccd9db5da682c3bedb94aac8d3ca9373e24359998912b8b40125418" Mar 08 22:02:01.955349 master-0 
kubenswrapper[7480]: I0308 22:02:01.955311 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-tn4pc" Mar 08 22:02:01.971347 master-0 kubenswrapper[7480]: I0308 22:02:01.971276 7480 scope.go:117] "RemoveContainer" containerID="ad82ee8e15ddd42271a76ddef01cd2ec252bf21267dece8cdc826dbc50b614f7" Mar 08 22:02:01.995561 master-0 kubenswrapper[7480]: I0308 22:02:01.995483 7480 scope.go:117] "RemoveContainer" containerID="e1437f5db69404d802631f01a40eb772816aa23d22b78b82add335d21a0b77ce" Mar 08 22:02:02.001907 master-0 kubenswrapper[7480]: I0308 22:02:02.001651 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-tn4pc"] Mar 08 22:02:02.007502 master-0 kubenswrapper[7480]: I0308 22:02:02.007433 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e-tmpfs\") pod \"packageserver-f988cd549-68kmh\" (UID: \"4e2eb05c-eaa5-4d9b-abad-c0ef6835087e\") " pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh" Mar 08 22:02:02.007689 master-0 kubenswrapper[7480]: I0308 22:02:02.007511 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhp8w\" (UniqueName: \"kubernetes.io/projected/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e-kube-api-access-lhp8w\") pod \"packageserver-f988cd549-68kmh\" (UID: \"4e2eb05c-eaa5-4d9b-abad-c0ef6835087e\") " pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh" Mar 08 22:02:02.007689 master-0 kubenswrapper[7480]: I0308 22:02:02.007555 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e-webhook-cert\") pod \"packageserver-f988cd549-68kmh\" (UID: \"4e2eb05c-eaa5-4d9b-abad-c0ef6835087e\") " pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh" Mar 08 22:02:02.007689 master-0 kubenswrapper[7480]: I0308 22:02:02.007603 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e-apiservice-cert\") pod \"packageserver-f988cd549-68kmh\" (UID: \"4e2eb05c-eaa5-4d9b-abad-c0ef6835087e\") " pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh" Mar 08 22:02:02.007689 master-0 kubenswrapper[7480]: I0308 22:02:02.007667 7480 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2c2c4964-678e-46ac-a500-8efc6b8255d9-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 08 22:02:02.007689 master-0 kubenswrapper[7480]: I0308 22:02:02.007686 7480 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2c2c4964-678e-46ac-a500-8efc6b8255d9-machine-approver-tls\") on node \"master-0\" DevicePath \"\"" Mar 08 22:02:02.007861 master-0 kubenswrapper[7480]: I0308 22:02:02.007702 7480 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c2c4964-678e-46ac-a500-8efc6b8255d9-config\") on node \"master-0\" DevicePath \"\"" Mar 08 22:02:02.007861 master-0 kubenswrapper[7480]: I0308 22:02:02.007716 7480 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgnsn\" (UniqueName: 
\"kubernetes.io/projected/2c2c4964-678e-46ac-a500-8efc6b8255d9-kube-api-access-lgnsn\") on node \"master-0\" DevicePath \"\"" Mar 08 22:02:02.009102 master-0 kubenswrapper[7480]: I0308 22:02:02.009041 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e-tmpfs\") pod \"packageserver-f988cd549-68kmh\" (UID: \"4e2eb05c-eaa5-4d9b-abad-c0ef6835087e\") " pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh" Mar 08 22:02:02.011169 master-0 kubenswrapper[7480]: I0308 22:02:02.011136 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e-apiservice-cert\") pod \"packageserver-f988cd549-68kmh\" (UID: \"4e2eb05c-eaa5-4d9b-abad-c0ef6835087e\") " pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh" Mar 08 22:02:02.012339 master-0 kubenswrapper[7480]: I0308 22:02:02.011585 7480 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-tn4pc"] Mar 08 22:02:02.016556 master-0 kubenswrapper[7480]: I0308 22:02:02.016390 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e-webhook-cert\") pod \"packageserver-f988cd549-68kmh\" (UID: \"4e2eb05c-eaa5-4d9b-abad-c0ef6835087e\") " pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh" Mar 08 22:02:02.023684 master-0 kubenswrapper[7480]: I0308 22:02:02.023616 7480 scope.go:117] "RemoveContainer" containerID="b20c11f43ccd9db5da682c3bedb94aac8d3ca9373e24359998912b8b40125418" Mar 08 22:02:02.025315 master-0 kubenswrapper[7480]: E0308 22:02:02.025188 7480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b20c11f43ccd9db5da682c3bedb94aac8d3ca9373e24359998912b8b40125418\": container with ID starting with b20c11f43ccd9db5da682c3bedb94aac8d3ca9373e24359998912b8b40125418 not found: ID does not exist" containerID="b20c11f43ccd9db5da682c3bedb94aac8d3ca9373e24359998912b8b40125418" Mar 08 22:02:02.025315 master-0 kubenswrapper[7480]: I0308 22:02:02.025249 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b20c11f43ccd9db5da682c3bedb94aac8d3ca9373e24359998912b8b40125418"} err="failed to get container status \"b20c11f43ccd9db5da682c3bedb94aac8d3ca9373e24359998912b8b40125418\": rpc error: code = NotFound desc = could not find container \"b20c11f43ccd9db5da682c3bedb94aac8d3ca9373e24359998912b8b40125418\": container with ID starting with b20c11f43ccd9db5da682c3bedb94aac8d3ca9373e24359998912b8b40125418 not found: ID does not exist" Mar 08 22:02:02.025315 master-0 kubenswrapper[7480]: I0308 22:02:02.025281 7480 scope.go:117] "RemoveContainer" containerID="ad82ee8e15ddd42271a76ddef01cd2ec252bf21267dece8cdc826dbc50b614f7" Mar 08 22:02:02.026714 master-0 kubenswrapper[7480]: E0308 22:02:02.025926 7480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad82ee8e15ddd42271a76ddef01cd2ec252bf21267dece8cdc826dbc50b614f7\": container with ID starting with ad82ee8e15ddd42271a76ddef01cd2ec252bf21267dece8cdc826dbc50b614f7 not found: ID does not exist" containerID="ad82ee8e15ddd42271a76ddef01cd2ec252bf21267dece8cdc826dbc50b614f7" Mar 08 22:02:02.026714 master-0 
kubenswrapper[7480]: I0308 22:02:02.025957 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad82ee8e15ddd42271a76ddef01cd2ec252bf21267dece8cdc826dbc50b614f7"} err="failed to get container status \"ad82ee8e15ddd42271a76ddef01cd2ec252bf21267dece8cdc826dbc50b614f7\": rpc error: code = NotFound desc = could not find container \"ad82ee8e15ddd42271a76ddef01cd2ec252bf21267dece8cdc826dbc50b614f7\": container with ID starting with ad82ee8e15ddd42271a76ddef01cd2ec252bf21267dece8cdc826dbc50b614f7 not found: ID does not exist" Mar 08 22:02:02.026714 master-0 kubenswrapper[7480]: I0308 22:02:02.025974 7480 scope.go:117] "RemoveContainer" containerID="e1437f5db69404d802631f01a40eb772816aa23d22b78b82add335d21a0b77ce" Mar 08 22:02:02.026714 master-0 kubenswrapper[7480]: E0308 22:02:02.026422 7480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1437f5db69404d802631f01a40eb772816aa23d22b78b82add335d21a0b77ce\": container with ID starting with e1437f5db69404d802631f01a40eb772816aa23d22b78b82add335d21a0b77ce not found: ID does not exist" containerID="e1437f5db69404d802631f01a40eb772816aa23d22b78b82add335d21a0b77ce" Mar 08 22:02:02.026714 master-0 kubenswrapper[7480]: I0308 22:02:02.026448 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1437f5db69404d802631f01a40eb772816aa23d22b78b82add335d21a0b77ce"} err="failed to get container status \"e1437f5db69404d802631f01a40eb772816aa23d22b78b82add335d21a0b77ce\": rpc error: code = NotFound desc = could not find container \"e1437f5db69404d802631f01a40eb772816aa23d22b78b82add335d21a0b77ce\": container with ID starting with e1437f5db69404d802631f01a40eb772816aa23d22b78b82add335d21a0b77ce not found: ID does not exist" Mar 08 22:02:02.026714 master-0 kubenswrapper[7480]: I0308 22:02:02.026466 7480 scope.go:117] "RemoveContainer" containerID="b20c11f43ccd9db5da682c3bedb94aac8d3ca9373e24359998912b8b40125418" Mar 08 22:02:02.026714 master-0 kubenswrapper[7480]: I0308 22:02:02.026709 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b20c11f43ccd9db5da682c3bedb94aac8d3ca9373e24359998912b8b40125418"} err="failed to get container status \"b20c11f43ccd9db5da682c3bedb94aac8d3ca9373e24359998912b8b40125418\": rpc error: code = NotFound desc = could not find container \"b20c11f43ccd9db5da682c3bedb94aac8d3ca9373e24359998912b8b40125418\": container with ID starting with b20c11f43ccd9db5da682c3bedb94aac8d3ca9373e24359998912b8b40125418 not found: ID does not exist" Mar 08 22:02:02.026714 master-0 kubenswrapper[7480]: I0308 22:02:02.026729 7480 scope.go:117] "RemoveContainer" containerID="ad82ee8e15ddd42271a76ddef01cd2ec252bf21267dece8cdc826dbc50b614f7" Mar 08 22:02:02.030240 master-0 kubenswrapper[7480]: I0308 22:02:02.028324 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad82ee8e15ddd42271a76ddef01cd2ec252bf21267dece8cdc826dbc50b614f7"} err="failed to get container status \"ad82ee8e15ddd42271a76ddef01cd2ec252bf21267dece8cdc826dbc50b614f7\": rpc error: code = NotFound desc = could not find container \"ad82ee8e15ddd42271a76ddef01cd2ec252bf21267dece8cdc826dbc50b614f7\": container with ID starting with ad82ee8e15ddd42271a76ddef01cd2ec252bf21267dece8cdc826dbc50b614f7 not found: ID does not exist" Mar 08 22:02:02.030240 master-0 kubenswrapper[7480]: I0308 22:02:02.028391 7480 scope.go:117] "RemoveContainer" 
containerID="e1437f5db69404d802631f01a40eb772816aa23d22b78b82add335d21a0b77ce" Mar 08 22:02:02.030240 master-0 kubenswrapper[7480]: I0308 22:02:02.028772 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1437f5db69404d802631f01a40eb772816aa23d22b78b82add335d21a0b77ce"} err="failed to get container status \"e1437f5db69404d802631f01a40eb772816aa23d22b78b82add335d21a0b77ce\": rpc error: code = NotFound desc = could not find container \"e1437f5db69404d802631f01a40eb772816aa23d22b78b82add335d21a0b77ce\": container with ID starting with e1437f5db69404d802631f01a40eb772816aa23d22b78b82add335d21a0b77ce not found: ID does not exist" Mar 08 22:02:02.033554 master-0 kubenswrapper[7480]: I0308 22:02:02.033444 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhp8w\" (UniqueName: \"kubernetes.io/projected/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e-kube-api-access-lhp8w\") pod \"packageserver-f988cd549-68kmh\" (UID: \"4e2eb05c-eaa5-4d9b-abad-c0ef6835087e\") " pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh" Mar 08 22:02:02.036528 master-0 kubenswrapper[7480]: I0308 22:02:02.036486 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf" Mar 08 22:02:02.053292 master-0 kubenswrapper[7480]: I0308 22:02:02.051384 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg"] Mar 08 22:02:02.053292 master-0 kubenswrapper[7480]: E0308 22:02:02.052024 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c2c4964-678e-46ac-a500-8efc6b8255d9" containerName="machine-approver-controller" Mar 08 22:02:02.053292 master-0 kubenswrapper[7480]: I0308 22:02:02.052054 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c2c4964-678e-46ac-a500-8efc6b8255d9" containerName="machine-approver-controller" Mar 08 22:02:02.053292 master-0 kubenswrapper[7480]: E0308 22:02:02.052151 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c2c4964-678e-46ac-a500-8efc6b8255d9" containerName="machine-approver-controller" Mar 08 22:02:02.053292 master-0 kubenswrapper[7480]: I0308 22:02:02.052232 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c2c4964-678e-46ac-a500-8efc6b8255d9" containerName="machine-approver-controller" Mar 08 22:02:02.053292 master-0 kubenswrapper[7480]: E0308 22:02:02.052308 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c2c4964-678e-46ac-a500-8efc6b8255d9" containerName="kube-rbac-proxy" Mar 08 22:02:02.053292 master-0 kubenswrapper[7480]: I0308 22:02:02.052324 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c2c4964-678e-46ac-a500-8efc6b8255d9" containerName="kube-rbac-proxy" Mar 08 22:02:02.053292 master-0 kubenswrapper[7480]: I0308 22:02:02.052683 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c2c4964-678e-46ac-a500-8efc6b8255d9" containerName="kube-rbac-proxy" Mar 08 22:02:02.053292 master-0 kubenswrapper[7480]: I0308 22:02:02.052720 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c2c4964-678e-46ac-a500-8efc6b8255d9" containerName="machine-approver-controller" Mar 08 22:02:02.053292 master-0 kubenswrapper[7480]: I0308 22:02:02.052784 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c2c4964-678e-46ac-a500-8efc6b8255d9" containerName="machine-approver-controller" Mar 08 
22:02:02.060379 master-0 kubenswrapper[7480]: I0308 22:02:02.054287 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg" Mar 08 22:02:02.060379 master-0 kubenswrapper[7480]: I0308 22:02:02.058432 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-cgg74" Mar 08 22:02:02.060379 master-0 kubenswrapper[7480]: I0308 22:02:02.058514 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 08 22:02:02.060379 master-0 kubenswrapper[7480]: I0308 22:02:02.058715 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 08 22:02:02.060379 master-0 kubenswrapper[7480]: I0308 22:02:02.058777 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 08 22:02:02.060379 master-0 kubenswrapper[7480]: I0308 22:02:02.059406 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 08 22:02:02.060379 master-0 kubenswrapper[7480]: I0308 22:02:02.059544 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 08 22:02:02.081491 master-0 kubenswrapper[7480]: I0308 22:02:02.081436 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj" Mar 08 22:02:02.099899 master-0 kubenswrapper[7480]: I0308 22:02:02.099862 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx" Mar 08 22:02:02.125268 master-0 kubenswrapper[7480]: W0308 22:02:02.125217 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ed9a4ec_9460_4e67_a372_ec6920c54832.slice/crio-0813d0cf4cf35a09a96a86777576a3fc55351040c297180f6267d3a48b50a9ac WatchSource:0}: Error finding container 0813d0cf4cf35a09a96a86777576a3fc55351040c297180f6267d3a48b50a9ac: Status 404 returned error can't find the container with id 0813d0cf4cf35a09a96a86777576a3fc55351040c297180f6267d3a48b50a9ac Mar 08 22:02:02.155554 master-0 kubenswrapper[7480]: I0308 22:02:02.155443 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh" Mar 08 22:02:02.215105 master-0 kubenswrapper[7480]: I0308 22:02:02.212204 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-config\") pod \"machine-approver-754bdc9f9d-stxvg\" (UID: \"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg" Mar 08 22:02:02.215105 master-0 kubenswrapper[7480]: I0308 22:02:02.212306 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-stxvg\" (UID: \"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg" Mar 08 22:02:02.215105 master-0 kubenswrapper[7480]: I0308 22:02:02.212354 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-stxvg\" (UID: \"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg" Mar 08 22:02:02.215105 master-0 kubenswrapper[7480]: I0308 22:02:02.212410 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxxvr\" (UniqueName: \"kubernetes.io/projected/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-kube-api-access-gxxvr\") pod \"machine-approver-754bdc9f9d-stxvg\" (UID: \"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg" Mar 08 22:02:02.315001 master-0 kubenswrapper[7480]: I0308 22:02:02.314949 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-config\") pod \"machine-approver-754bdc9f9d-stxvg\" (UID: \"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg" Mar 08 22:02:02.315125 master-0 kubenswrapper[7480]: I0308 22:02:02.314054 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-config\") pod \"machine-approver-754bdc9f9d-stxvg\" (UID: \"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg" Mar 08 22:02:02.315125 master-0 kubenswrapper[7480]: I0308 22:02:02.315053 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-stxvg\" (UID: \"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg" Mar 08 22:02:02.315125 master-0 kubenswrapper[7480]: I0308 22:02:02.315092 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-stxvg\" (UID: 
\"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg" Mar 08 22:02:02.315228 master-0 kubenswrapper[7480]: I0308 22:02:02.315136 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxxvr\" (UniqueName: \"kubernetes.io/projected/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-kube-api-access-gxxvr\") pod \"machine-approver-754bdc9f9d-stxvg\" (UID: \"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg" Mar 08 22:02:02.315989 master-0 kubenswrapper[7480]: I0308 22:02:02.315952 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-stxvg\" (UID: \"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg" Mar 08 22:02:02.326052 master-0 kubenswrapper[7480]: I0308 22:02:02.325969 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-stxvg\" (UID: \"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg" Mar 08 22:02:02.333048 master-0 kubenswrapper[7480]: I0308 22:02:02.332580 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxxvr\" (UniqueName: \"kubernetes.io/projected/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-kube-api-access-gxxvr\") pod \"machine-approver-754bdc9f9d-stxvg\" (UID: \"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg" Mar 08 22:02:02.376919 master-0 kubenswrapper[7480]: I0308 22:02:02.376835 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg" Mar 08 22:02:02.396367 master-0 kubenswrapper[7480]: W0308 22:02:02.396305 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cbc6c17_7c16_435f_9399_b6f1ddb6d17f.slice/crio-c5f350fe49a4dbfc3234a2ef7026b555f76884632095fc5a87ca7626e176aff9 WatchSource:0}: Error finding container c5f350fe49a4dbfc3234a2ef7026b555f76884632095fc5a87ca7626e176aff9: Status 404 returned error can't find the container with id c5f350fe49a4dbfc3234a2ef7026b555f76884632095fc5a87ca7626e176aff9 Mar 08 22:02:02.441558 master-0 kubenswrapper[7480]: I0308 22:02:02.441507 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf"] Mar 08 22:02:02.446656 master-0 kubenswrapper[7480]: W0308 22:02:02.446590 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e38e989_41b8_4c80_99fb_8d414dda5da1.slice/crio-1d036d34fc0a96523a8a522c774101e6f8bb0dc6fc53b1cd8cbadc061d7fc1f7 WatchSource:0}: Error finding container 1d036d34fc0a96523a8a522c774101e6f8bb0dc6fc53b1cd8cbadc061d7fc1f7: Status 404 returned error can't find the container with id 1d036d34fc0a96523a8a522c774101e6f8bb0dc6fc53b1cd8cbadc061d7fc1f7 Mar 08 22:02:02.557206 master-0 kubenswrapper[7480]: I0308 22:02:02.557060 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj"] Mar 08 22:02:02.611809 master-0 kubenswrapper[7480]: I0308 22:02:02.605491 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh"] Mar 08 22:02:02.615835 master-0 kubenswrapper[7480]: W0308 22:02:02.615564 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e2eb05c_eaa5_4d9b_abad_c0ef6835087e.slice/crio-e67705a9ff72460926d3738d4c71ca542e923f9e2d5919412750e64a1d0ce8cf WatchSource:0}: Error finding container e67705a9ff72460926d3738d4c71ca542e923f9e2d5919412750e64a1d0ce8cf: Status 404 returned error can't find the container with id e67705a9ff72460926d3738d4c71ca542e923f9e2d5919412750e64a1d0ce8cf Mar 08 22:02:02.971336 master-0 kubenswrapper[7480]: I0308 22:02:02.971253 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx" event={"ID":"5ed9a4ec-9460-4e67-a372-ec6920c54832","Type":"ContainerStarted","Data":"0813d0cf4cf35a09a96a86777576a3fc55351040c297180f6267d3a48b50a9ac"} Mar 08 22:02:02.973169 master-0 kubenswrapper[7480]: I0308 22:02:02.973113 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh" event={"ID":"4e2eb05c-eaa5-4d9b-abad-c0ef6835087e","Type":"ContainerStarted","Data":"c9f45339dc296c60cee9cd8facd74fa45cd8d922e460c120ae31130a8da944c9"} Mar 08 22:02:02.973247 master-0 kubenswrapper[7480]: I0308 22:02:02.973194 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh" event={"ID":"4e2eb05c-eaa5-4d9b-abad-c0ef6835087e","Type":"ContainerStarted","Data":"e67705a9ff72460926d3738d4c71ca542e923f9e2d5919412750e64a1d0ce8cf"} Mar 08 22:02:02.976842 master-0 kubenswrapper[7480]: I0308 22:02:02.976773 7480 
patch_prober.go:28] interesting pod/packageserver-f988cd549-68kmh container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.68:5443/healthz\": dial tcp 10.128.0.68:5443: connect: connection refused" start-of-body= Mar 08 22:02:02.976929 master-0 kubenswrapper[7480]: I0308 22:02:02.976841 7480 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh" podUID="4e2eb05c-eaa5-4d9b-abad-c0ef6835087e" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.68:5443/healthz\": dial tcp 10.128.0.68:5443: connect: connection refused" Mar 08 22:02:02.978004 master-0 kubenswrapper[7480]: I0308 22:02:02.977922 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh" Mar 08 22:02:02.979708 master-0 kubenswrapper[7480]: I0308 22:02:02.979635 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj" event={"ID":"1ef14467-bb62-462d-9dec-dee43e4cc9bd","Type":"ContainerStarted","Data":"77814812894cae312166fb4b1d60568f421a6441a0acb548490be9a3f80f4c65"} Mar 08 22:02:02.979824 master-0 kubenswrapper[7480]: I0308 22:02:02.979720 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj" event={"ID":"1ef14467-bb62-462d-9dec-dee43e4cc9bd","Type":"ContainerStarted","Data":"a3c825039f429bbbe3e7e27ef1491ff9c435ad7f4d68ed1d1f7b0b138f9a2544"} Mar 08 22:02:02.982244 master-0 kubenswrapper[7480]: I0308 22:02:02.982132 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg" event={"ID":"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f","Type":"ContainerStarted","Data":"04ea7cefcb78239f13efed84a01c73c9c7a659eaa2abd9abb2c9410ed57bcc52"} Mar 08 22:02:02.982244 master-0 kubenswrapper[7480]: I0308 22:02:02.982175 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg" event={"ID":"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f","Type":"ContainerStarted","Data":"c5f350fe49a4dbfc3234a2ef7026b555f76884632095fc5a87ca7626e176aff9"} Mar 08 22:02:02.991670 master-0 kubenswrapper[7480]: I0308 22:02:02.991516 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf" event={"ID":"3e38e989-41b8-4c80-99fb-8d414dda5da1","Type":"ContainerStarted","Data":"5bbd0df97183d8637c0e656471f38367a5ad7905a4855ed56a03e62c7164dbdd"} Mar 08 22:02:02.991670 master-0 kubenswrapper[7480]: I0308 22:02:02.991571 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf" event={"ID":"3e38e989-41b8-4c80-99fb-8d414dda5da1","Type":"ContainerStarted","Data":"6ed8d9b29a081602db7df52fa208e1ced8636f34e50cd9dbcb9d6a6d48cd183e"} Mar 08 22:02:02.991670 master-0 kubenswrapper[7480]: I0308 22:02:02.991585 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf" event={"ID":"3e38e989-41b8-4c80-99fb-8d414dda5da1","Type":"ContainerStarted","Data":"1d036d34fc0a96523a8a522c774101e6f8bb0dc6fc53b1cd8cbadc061d7fc1f7"} Mar 08 22:02:03.024522 master-0 kubenswrapper[7480]: I0308 22:02:03.024397 7480 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh" podStartSLOduration=2.024372997 podStartE2EDuration="2.024372997s" podCreationTimestamp="2026-03-08 22:02:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:02:02.995743257 +0000 UTC m=+273.449363879" watchObservedRunningTime="2026-03-08 22:02:03.024372997 +0000 UTC m=+273.477993599" Mar 08 22:02:03.026620 master-0 kubenswrapper[7480]: I0308 22:02:03.026566 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf" podStartSLOduration=2.026558214 podStartE2EDuration="2.026558214s" podCreationTimestamp="2026-03-08 22:02:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:02:03.02255976 +0000 UTC m=+273.476180372" watchObservedRunningTime="2026-03-08 22:02:03.026558214 +0000 UTC m=+273.480178816" Mar 08 22:02:03.798719 master-0 kubenswrapper[7480]: I0308 22:02:03.798643 7480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c2c4964-678e-46ac-a500-8efc6b8255d9" path="/var/lib/kubelet/pods/2c2c4964-678e-46ac-a500-8efc6b8255d9/volumes" Mar 08 22:02:04.015150 master-0 kubenswrapper[7480]: I0308 22:02:04.013923 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg" event={"ID":"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f","Type":"ContainerStarted","Data":"4c252b52dc72b4cf9a688685e68fed111ec3680baa86d43719d7d70d42220e79"} Mar 08 22:02:04.021522 master-0 kubenswrapper[7480]: I0308 22:02:04.021468 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh" Mar 08 22:02:04.043970 master-0 kubenswrapper[7480]: I0308 22:02:04.043393 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg" podStartSLOduration=2.043361207 podStartE2EDuration="2.043361207s" podCreationTimestamp="2026-03-08 22:02:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:02:04.036855909 +0000 UTC m=+274.490476551" watchObservedRunningTime="2026-03-08 22:02:04.043361207 +0000 UTC m=+274.496981809" Mar 08 22:02:06.122256 master-0 kubenswrapper[7480]: I0308 22:02:06.122186 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-q669r"] Mar 08 22:02:06.123126 master-0 kubenswrapper[7480]: I0308 22:02:06.123063 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-q669r" Mar 08 22:02:06.126051 master-0 kubenswrapper[7480]: I0308 22:02:06.126000 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-fnk6l" Mar 08 22:02:06.126161 master-0 kubenswrapper[7480]: I0308 22:02:06.126116 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 08 22:02:06.281743 master-0 kubenswrapper[7480]: I0308 22:02:06.281666 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3-proxy-tls\") pod \"machine-config-daemon-q669r\" (UID: \"7868a4fb-af89-4bdc-b41b-31f4ee59b5f3\") " pod="openshift-machine-config-operator/machine-config-daemon-q669r" Mar 08 22:02:06.282008 master-0 kubenswrapper[7480]: I0308 22:02:06.281775 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shdtk\" (UniqueName: \"kubernetes.io/projected/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3-kube-api-access-shdtk\") pod \"machine-config-daemon-q669r\" (UID: \"7868a4fb-af89-4bdc-b41b-31f4ee59b5f3\") " pod="openshift-machine-config-operator/machine-config-daemon-q669r" Mar 08 22:02:06.282008 master-0 kubenswrapper[7480]: I0308 22:02:06.281825 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3-rootfs\") pod \"machine-config-daemon-q669r\" (UID: \"7868a4fb-af89-4bdc-b41b-31f4ee59b5f3\") " pod="openshift-machine-config-operator/machine-config-daemon-q669r" Mar 08 22:02:06.282008 master-0 kubenswrapper[7480]: I0308 22:02:06.281851 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3-mcd-auth-proxy-config\") pod \"machine-config-daemon-q669r\" (UID: \"7868a4fb-af89-4bdc-b41b-31f4ee59b5f3\") " pod="openshift-machine-config-operator/machine-config-daemon-q669r" Mar 08 22:02:06.385204 master-0 kubenswrapper[7480]: I0308 22:02:06.383608 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3-proxy-tls\") pod \"machine-config-daemon-q669r\" (UID: \"7868a4fb-af89-4bdc-b41b-31f4ee59b5f3\") " pod="openshift-machine-config-operator/machine-config-daemon-q669r" Mar 08 22:02:06.385204 master-0 kubenswrapper[7480]: I0308 22:02:06.383695 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shdtk\" (UniqueName: \"kubernetes.io/projected/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3-kube-api-access-shdtk\") pod \"machine-config-daemon-q669r\" (UID: \"7868a4fb-af89-4bdc-b41b-31f4ee59b5f3\") " pod="openshift-machine-config-operator/machine-config-daemon-q669r" Mar 08 22:02:06.385204 master-0 kubenswrapper[7480]: I0308 22:02:06.383724 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3-mcd-auth-proxy-config\") pod \"machine-config-daemon-q669r\" (UID: \"7868a4fb-af89-4bdc-b41b-31f4ee59b5f3\") " pod="openshift-machine-config-operator/machine-config-daemon-q669r" 
Mar 08 22:02:06.385204 master-0 kubenswrapper[7480]: I0308 22:02:06.383743 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3-rootfs\") pod \"machine-config-daemon-q669r\" (UID: \"7868a4fb-af89-4bdc-b41b-31f4ee59b5f3\") " pod="openshift-machine-config-operator/machine-config-daemon-q669r" Mar 08 22:02:06.385204 master-0 kubenswrapper[7480]: I0308 22:02:06.383851 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3-rootfs\") pod \"machine-config-daemon-q669r\" (UID: \"7868a4fb-af89-4bdc-b41b-31f4ee59b5f3\") " pod="openshift-machine-config-operator/machine-config-daemon-q669r" Mar 08 22:02:06.386777 master-0 kubenswrapper[7480]: I0308 22:02:06.386740 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3-mcd-auth-proxy-config\") pod \"machine-config-daemon-q669r\" (UID: \"7868a4fb-af89-4bdc-b41b-31f4ee59b5f3\") " pod="openshift-machine-config-operator/machine-config-daemon-q669r" Mar 08 22:02:06.387257 master-0 kubenswrapper[7480]: I0308 22:02:06.387231 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3-proxy-tls\") pod \"machine-config-daemon-q669r\" (UID: \"7868a4fb-af89-4bdc-b41b-31f4ee59b5f3\") " pod="openshift-machine-config-operator/machine-config-daemon-q669r" Mar 08 22:02:06.421461 master-0 kubenswrapper[7480]: I0308 22:02:06.421398 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shdtk\" (UniqueName: \"kubernetes.io/projected/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3-kube-api-access-shdtk\") pod \"machine-config-daemon-q669r\" (UID: \"7868a4fb-af89-4bdc-b41b-31f4ee59b5f3\") " pod="openshift-machine-config-operator/machine-config-daemon-q669r" Mar 08 22:02:06.463821 master-0 kubenswrapper[7480]: I0308 22:02:06.463760 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-q669r" Mar 08 22:02:08.068896 master-0 kubenswrapper[7480]: I0308 22:02:08.068292 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx" event={"ID":"5ed9a4ec-9460-4e67-a372-ec6920c54832","Type":"ContainerStarted","Data":"a4e327b57c620233ccd28a765190266e9a842db3606cdccec529793103ce7cd8"} Mar 08 22:02:08.078214 master-0 kubenswrapper[7480]: I0308 22:02:08.075702 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q669r" event={"ID":"7868a4fb-af89-4bdc-b41b-31f4ee59b5f3","Type":"ContainerStarted","Data":"4c8a0efa9298dfa9e5a85238c8444d06b35c3a684b882cab8d59cc5684624441"} Mar 08 22:02:08.078214 master-0 kubenswrapper[7480]: I0308 22:02:08.075774 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q669r" event={"ID":"7868a4fb-af89-4bdc-b41b-31f4ee59b5f3","Type":"ContainerStarted","Data":"2e34987c76ae3161515e58a685409125bb3c2f2c0b1e13425d28a3f54cc0d97c"} Mar 08 22:02:08.097241 master-0 kubenswrapper[7480]: I0308 22:02:08.097139 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-q669r" podStartSLOduration=2.097112405 podStartE2EDuration="2.097112405s" podCreationTimestamp="2026-03-08 22:02:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:02:08.095403141 +0000 UTC m=+278.549023753" watchObservedRunningTime="2026-03-08 22:02:08.097112405 +0000 UTC m=+278.550733007" Mar 08 22:02:09.084728 master-0 kubenswrapper[7480]: I0308 22:02:09.084653 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx" event={"ID":"5ed9a4ec-9460-4e67-a372-ec6920c54832","Type":"ContainerStarted","Data":"ee260c45deb9e624f61d9e398299b8f6c2fb8df9d89676292ad77613ca07be93"} Mar 08 22:02:09.084728 master-0 kubenswrapper[7480]: I0308 22:02:09.084723 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx" event={"ID":"5ed9a4ec-9460-4e67-a372-ec6920c54832","Type":"ContainerStarted","Data":"7240955b26ac1e289f96f129a5f6efabc10b20b49058f009c3b04a5f39d8facc"} Mar 08 22:02:09.087164 master-0 kubenswrapper[7480]: I0308 22:02:09.087117 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q669r" event={"ID":"7868a4fb-af89-4bdc-b41b-31f4ee59b5f3","Type":"ContainerStarted","Data":"a4b49acdc17f72dccdea435d19b95ddc086fac3671e588788c4c65e2f7e9dc9b"} Mar 08 22:02:09.239159 master-0 kubenswrapper[7480]: I0308 22:02:09.239060 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx" podStartSLOduration=2.657877717 podStartE2EDuration="8.239033982s" podCreationTimestamp="2026-03-08 22:02:01 +0000 UTC" firstStartedPulling="2026-03-08 22:02:02.132168103 +0000 UTC m=+272.585788705" lastFinishedPulling="2026-03-08 22:02:07.713324378 +0000 UTC m=+278.166944970" observedRunningTime="2026-03-08 22:02:09.237163273 +0000 UTC m=+279.690783885" 
watchObservedRunningTime="2026-03-08 22:02:09.239033982 +0000 UTC m=+279.692654584" Mar 08 22:02:11.344648 master-0 kubenswrapper[7480]: I0308 22:02:11.344589 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m"] Mar 08 22:02:11.345907 master-0 kubenswrapper[7480]: I0308 22:02:11.345390 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m" Mar 08 22:02:11.348920 master-0 kubenswrapper[7480]: I0308 22:02:11.348861 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-jfbvc" Mar 08 22:02:11.349041 master-0 kubenswrapper[7480]: I0308 22:02:11.348930 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 08 22:02:11.448906 master-0 kubenswrapper[7480]: I0308 22:02:11.446139 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m"] Mar 08 22:02:11.500591 master-0 kubenswrapper[7480]: I0308 22:02:11.500527 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b6bc6f78-2c5c-4add-925f-f6568a49c2cc-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-zn77m\" (UID: \"b6bc6f78-2c5c-4add-925f-f6568a49c2cc\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m" Mar 08 22:02:11.500862 master-0 kubenswrapper[7480]: I0308 22:02:11.500749 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c52wj\" (UniqueName: \"kubernetes.io/projected/b6bc6f78-2c5c-4add-925f-f6568a49c2cc-kube-api-access-c52wj\") pod \"machine-config-controller-ff46b7bdf-zn77m\" (UID: \"b6bc6f78-2c5c-4add-925f-f6568a49c2cc\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m" Mar 08 22:02:11.500862 master-0 kubenswrapper[7480]: I0308 22:02:11.500797 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b6bc6f78-2c5c-4add-925f-f6568a49c2cc-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-zn77m\" (UID: \"b6bc6f78-2c5c-4add-925f-f6568a49c2cc\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m" Mar 08 22:02:11.602010 master-0 kubenswrapper[7480]: I0308 22:02:11.601860 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b6bc6f78-2c5c-4add-925f-f6568a49c2cc-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-zn77m\" (UID: \"b6bc6f78-2c5c-4add-925f-f6568a49c2cc\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m" Mar 08 22:02:11.602010 master-0 kubenswrapper[7480]: I0308 22:02:11.601951 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b6bc6f78-2c5c-4add-925f-f6568a49c2cc-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-zn77m\" (UID: \"b6bc6f78-2c5c-4add-925f-f6568a49c2cc\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m" Mar 08 22:02:11.602010 master-0 kubenswrapper[7480]: I0308 
22:02:11.601979 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c52wj\" (UniqueName: \"kubernetes.io/projected/b6bc6f78-2c5c-4add-925f-f6568a49c2cc-kube-api-access-c52wj\") pod \"machine-config-controller-ff46b7bdf-zn77m\" (UID: \"b6bc6f78-2c5c-4add-925f-f6568a49c2cc\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m" Mar 08 22:02:11.603545 master-0 kubenswrapper[7480]: I0308 22:02:11.603486 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b6bc6f78-2c5c-4add-925f-f6568a49c2cc-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-zn77m\" (UID: \"b6bc6f78-2c5c-4add-925f-f6568a49c2cc\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m" Mar 08 22:02:11.617542 master-0 kubenswrapper[7480]: I0308 22:02:11.617488 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c52wj\" (UniqueName: \"kubernetes.io/projected/b6bc6f78-2c5c-4add-925f-f6568a49c2cc-kube-api-access-c52wj\") pod \"machine-config-controller-ff46b7bdf-zn77m\" (UID: \"b6bc6f78-2c5c-4add-925f-f6568a49c2cc\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m" Mar 08 22:02:11.620774 master-0 kubenswrapper[7480]: I0308 22:02:11.620737 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b6bc6f78-2c5c-4add-925f-f6568a49c2cc-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-zn77m\" (UID: \"b6bc6f78-2c5c-4add-925f-f6568a49c2cc\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m" Mar 08 22:02:11.668981 master-0 kubenswrapper[7480]: I0308 22:02:11.668880 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m" Mar 08 22:02:12.099258 master-0 kubenswrapper[7480]: I0308 22:02:12.099201 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m"] Mar 08 22:02:14.778551 master-0 kubenswrapper[7480]: W0308 22:02:14.778448 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6bc6f78_2c5c_4add_925f_f6568a49c2cc.slice/crio-ec5f0a537ae65684298a1a4ad3696c2f1fea1eefa39c8057ddfd9d3609fd93bf WatchSource:0}: Error finding container ec5f0a537ae65684298a1a4ad3696c2f1fea1eefa39c8057ddfd9d3609fd93bf: Status 404 returned error can't find the container with id ec5f0a537ae65684298a1a4ad3696c2f1fea1eefa39c8057ddfd9d3609fd93bf Mar 08 22:02:15.150718 master-0 kubenswrapper[7480]: I0308 22:02:15.150606 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m" event={"ID":"b6bc6f78-2c5c-4add-925f-f6568a49c2cc","Type":"ContainerStarted","Data":"0871d5393b2287077e78ea4cabbc123965065d582cc608c8130a11e8d227ebf0"} Mar 08 22:02:15.150718 master-0 kubenswrapper[7480]: I0308 22:02:15.150694 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m" event={"ID":"b6bc6f78-2c5c-4add-925f-f6568a49c2cc","Type":"ContainerStarted","Data":"ea9d698fbce1d205747d5157a6c744e1ac0246ad5c16718bbe3cc568d31c44f2"} Mar 08 22:02:15.150718 master-0 kubenswrapper[7480]: I0308 22:02:15.150717 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m" event={"ID":"b6bc6f78-2c5c-4add-925f-f6568a49c2cc","Type":"ContainerStarted","Data":"ec5f0a537ae65684298a1a4ad3696c2f1fea1eefa39c8057ddfd9d3609fd93bf"} Mar 08 22:02:15.161186 master-0 kubenswrapper[7480]: I0308 22:02:15.161124 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj" event={"ID":"1ef14467-bb62-462d-9dec-dee43e4cc9bd","Type":"ContainerStarted","Data":"8c5935d4c8ced0d1522d2fa823597581df0f0db73a8f0870aa81ef671ab128d8"} Mar 08 22:02:15.163766 master-0 kubenswrapper[7480]: I0308 22:02:15.163433 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-79f8cd6fdd-4fsdl"] Mar 08 22:02:15.164867 master-0 kubenswrapper[7480]: I0308 22:02:15.164847 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:02:15.168032 master-0 kubenswrapper[7480]: I0308 22:02:15.167741 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 08 22:02:15.168032 master-0 kubenswrapper[7480]: I0308 22:02:15.167860 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 08 22:02:15.169244 master-0 kubenswrapper[7480]: I0308 22:02:15.169198 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 08 22:02:15.169492 master-0 kubenswrapper[7480]: I0308 22:02:15.169441 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 08 22:02:15.169674 master-0 kubenswrapper[7480]: I0308 22:02:15.169629 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 08 22:02:15.176210 master-0 kubenswrapper[7480]: I0308 22:02:15.175696 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 08 22:02:15.179961 master-0 kubenswrapper[7480]: I0308 22:02:15.179926 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kx9pl"] Mar 08 22:02:15.180951 master-0 kubenswrapper[7480]: I0308 22:02:15.180925 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kx9pl" Mar 08 22:02:15.182863 master-0 kubenswrapper[7480]: I0308 22:02:15.182738 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 08 22:02:15.184457 master-0 kubenswrapper[7480]: I0308 22:02:15.184413 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-7c67b67d47-qf2dp"] Mar 08 22:02:15.185443 master-0 kubenswrapper[7480]: I0308 22:02:15.185408 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-qf2dp" Mar 08 22:02:15.208267 master-0 kubenswrapper[7480]: I0308 22:02:15.208189 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-7c67b67d47-qf2dp"] Mar 08 22:02:15.226408 master-0 kubenswrapper[7480]: I0308 22:02:15.226321 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m" podStartSLOduration=4.226295331 podStartE2EDuration="4.226295331s" podCreationTimestamp="2026-03-08 22:02:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:02:15.202623129 +0000 UTC m=+285.656243741" watchObservedRunningTime="2026-03-08 22:02:15.226295331 +0000 UTC m=+285.679915933" Mar 08 22:02:15.228574 master-0 kubenswrapper[7480]: I0308 22:02:15.228542 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kx9pl"] Mar 08 22:02:15.260139 master-0 kubenswrapper[7480]: I0308 22:02:15.260030 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/81f5ed55-225c-41e2-bc9d-b41063a604c9-default-certificate\") pod \"router-default-79f8cd6fdd-4fsdl\" (UID: \"81f5ed55-225c-41e2-bc9d-b41063a604c9\") " pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:02:15.260139 master-0 kubenswrapper[7480]: I0308 22:02:15.260140 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kz92\" (UniqueName: \"kubernetes.io/projected/81f5ed55-225c-41e2-bc9d-b41063a604c9-kube-api-access-7kz92\") pod \"router-default-79f8cd6fdd-4fsdl\" (UID: \"81f5ed55-225c-41e2-bc9d-b41063a604c9\") " pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:02:15.260405 master-0 kubenswrapper[7480]: I0308 22:02:15.260188 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81f5ed55-225c-41e2-bc9d-b41063a604c9-service-ca-bundle\") pod \"router-default-79f8cd6fdd-4fsdl\" (UID: \"81f5ed55-225c-41e2-bc9d-b41063a604c9\") " pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:02:15.260405 master-0 kubenswrapper[7480]: I0308 22:02:15.260209 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/81f5ed55-225c-41e2-bc9d-b41063a604c9-stats-auth\") pod \"router-default-79f8cd6fdd-4fsdl\" (UID: \"81f5ed55-225c-41e2-bc9d-b41063a604c9\") " pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:02:15.260405 master-0 kubenswrapper[7480]: I0308 22:02:15.260257 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81f5ed55-225c-41e2-bc9d-b41063a604c9-metrics-certs\") pod \"router-default-79f8cd6fdd-4fsdl\" (UID: \"81f5ed55-225c-41e2-bc9d-b41063a604c9\") " pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:02:15.366479 master-0 kubenswrapper[7480]: I0308 22:02:15.366421 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kz92\" (UniqueName: 
\"kubernetes.io/projected/81f5ed55-225c-41e2-bc9d-b41063a604c9-kube-api-access-7kz92\") pod \"router-default-79f8cd6fdd-4fsdl\" (UID: \"81f5ed55-225c-41e2-bc9d-b41063a604c9\") " pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:02:15.366763 master-0 kubenswrapper[7480]: I0308 22:02:15.366705 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81f5ed55-225c-41e2-bc9d-b41063a604c9-service-ca-bundle\") pod \"router-default-79f8cd6fdd-4fsdl\" (UID: \"81f5ed55-225c-41e2-bc9d-b41063a604c9\") " pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:02:15.366832 master-0 kubenswrapper[7480]: I0308 22:02:15.366810 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/81f5ed55-225c-41e2-bc9d-b41063a604c9-stats-auth\") pod \"router-default-79f8cd6fdd-4fsdl\" (UID: \"81f5ed55-225c-41e2-bc9d-b41063a604c9\") " pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:02:15.366905 master-0 kubenswrapper[7480]: I0308 22:02:15.366882 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81f5ed55-225c-41e2-bc9d-b41063a604c9-metrics-certs\") pod \"router-default-79f8cd6fdd-4fsdl\" (UID: \"81f5ed55-225c-41e2-bc9d-b41063a604c9\") " pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:02:15.366969 master-0 kubenswrapper[7480]: I0308 22:02:15.366950 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e635b0da-956b-4636-bc9b-61f231241908-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-kx9pl\" (UID: \"e635b0da-956b-4636-bc9b-61f231241908\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kx9pl" Mar 08 22:02:15.367101 master-0 kubenswrapper[7480]: I0308 22:02:15.367053 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/81f5ed55-225c-41e2-bc9d-b41063a604c9-default-certificate\") pod \"router-default-79f8cd6fdd-4fsdl\" (UID: \"81f5ed55-225c-41e2-bc9d-b41063a604c9\") " pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:02:15.367143 master-0 kubenswrapper[7480]: I0308 22:02:15.367127 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjndf\" (UniqueName: \"kubernetes.io/projected/10e2e81b-cd18-4e30-b8ad-4cf105daea4a-kube-api-access-sjndf\") pod \"network-check-source-7c67b67d47-qf2dp\" (UID: \"10e2e81b-cd18-4e30-b8ad-4cf105daea4a\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-qf2dp" Mar 08 22:02:15.367789 master-0 kubenswrapper[7480]: I0308 22:02:15.367755 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81f5ed55-225c-41e2-bc9d-b41063a604c9-service-ca-bundle\") pod \"router-default-79f8cd6fdd-4fsdl\" (UID: \"81f5ed55-225c-41e2-bc9d-b41063a604c9\") " pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:02:15.372052 master-0 kubenswrapper[7480]: I0308 22:02:15.370425 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/81f5ed55-225c-41e2-bc9d-b41063a604c9-default-certificate\") pod 
\"router-default-79f8cd6fdd-4fsdl\" (UID: \"81f5ed55-225c-41e2-bc9d-b41063a604c9\") " pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:02:15.372052 master-0 kubenswrapper[7480]: I0308 22:02:15.371130 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/81f5ed55-225c-41e2-bc9d-b41063a604c9-stats-auth\") pod \"router-default-79f8cd6fdd-4fsdl\" (UID: \"81f5ed55-225c-41e2-bc9d-b41063a604c9\") " pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:02:15.374282 master-0 kubenswrapper[7480]: I0308 22:02:15.374205 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81f5ed55-225c-41e2-bc9d-b41063a604c9-metrics-certs\") pod \"router-default-79f8cd6fdd-4fsdl\" (UID: \"81f5ed55-225c-41e2-bc9d-b41063a604c9\") " pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:02:15.408586 master-0 kubenswrapper[7480]: I0308 22:02:15.408536 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kz92\" (UniqueName: \"kubernetes.io/projected/81f5ed55-225c-41e2-bc9d-b41063a604c9-kube-api-access-7kz92\") pod \"router-default-79f8cd6fdd-4fsdl\" (UID: \"81f5ed55-225c-41e2-bc9d-b41063a604c9\") " pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:02:15.423820 master-0 kubenswrapper[7480]: I0308 22:02:15.423785 7480 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 08 22:02:15.470918 master-0 kubenswrapper[7480]: I0308 22:02:15.470827 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e635b0da-956b-4636-bc9b-61f231241908-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-kx9pl\" (UID: \"e635b0da-956b-4636-bc9b-61f231241908\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kx9pl" Mar 08 22:02:15.471669 master-0 kubenswrapper[7480]: I0308 22:02:15.471625 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjndf\" (UniqueName: \"kubernetes.io/projected/10e2e81b-cd18-4e30-b8ad-4cf105daea4a-kube-api-access-sjndf\") pod \"network-check-source-7c67b67d47-qf2dp\" (UID: \"10e2e81b-cd18-4e30-b8ad-4cf105daea4a\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-qf2dp" Mar 08 22:02:15.484023 master-0 kubenswrapper[7480]: I0308 22:02:15.483973 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e635b0da-956b-4636-bc9b-61f231241908-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-kx9pl\" (UID: \"e635b0da-956b-4636-bc9b-61f231241908\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kx9pl" Mar 08 22:02:15.499540 master-0 kubenswrapper[7480]: I0308 22:02:15.499398 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:02:15.514725 master-0 kubenswrapper[7480]: I0308 22:02:15.514418 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kx9pl" Mar 08 22:02:15.518395 master-0 kubenswrapper[7480]: I0308 22:02:15.518357 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjndf\" (UniqueName: \"kubernetes.io/projected/10e2e81b-cd18-4e30-b8ad-4cf105daea4a-kube-api-access-sjndf\") pod \"network-check-source-7c67b67d47-qf2dp\" (UID: \"10e2e81b-cd18-4e30-b8ad-4cf105daea4a\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-qf2dp" Mar 08 22:02:15.533717 master-0 kubenswrapper[7480]: I0308 22:02:15.533656 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-qf2dp" Mar 08 22:02:16.003676 master-0 kubenswrapper[7480]: I0308 22:02:16.003480 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj" podStartSLOduration=2.960802114 podStartE2EDuration="15.003458572s" podCreationTimestamp="2026-03-08 22:02:01 +0000 UTC" firstStartedPulling="2026-03-08 22:02:02.889179004 +0000 UTC m=+273.342799606" lastFinishedPulling="2026-03-08 22:02:14.931835462 +0000 UTC m=+285.385456064" observedRunningTime="2026-03-08 22:02:15.364215135 +0000 UTC m=+285.817835747" watchObservedRunningTime="2026-03-08 22:02:16.003458572 +0000 UTC m=+286.457079174" Mar 08 22:02:16.007453 master-0 kubenswrapper[7480]: I0308 22:02:16.006613 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kx9pl"] Mar 08 22:02:16.098918 master-0 kubenswrapper[7480]: I0308 22:02:16.098855 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-7c67b67d47-qf2dp"] Mar 08 22:02:16.169971 master-0 kubenswrapper[7480]: I0308 22:02:16.169866 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kx9pl" event={"ID":"e635b0da-956b-4636-bc9b-61f231241908","Type":"ContainerStarted","Data":"c3c767d6aca988650063d67045483c4316fb23551293f63bcb6227962e14fff7"} Mar 08 22:02:16.171977 master-0 kubenswrapper[7480]: I0308 22:02:16.171898 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" event={"ID":"81f5ed55-225c-41e2-bc9d-b41063a604c9","Type":"ContainerStarted","Data":"546b6a60e0c7d74e50a429925cb5072388fd5ebf8c592233957d28ac0705b80e"} Mar 08 22:02:16.173736 master-0 kubenswrapper[7480]: I0308 22:02:16.173680 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-qf2dp" event={"ID":"10e2e81b-cd18-4e30-b8ad-4cf105daea4a","Type":"ContainerStarted","Data":"0c50be0fc3f4780032df6f771d4507e5bf45df79f6025c39b105620c89303b83"} Mar 08 22:02:17.193207 master-0 kubenswrapper[7480]: I0308 22:02:17.190742 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-qf2dp" event={"ID":"10e2e81b-cd18-4e30-b8ad-4cf105daea4a","Type":"ContainerStarted","Data":"292b7794be112451b21f81dda371f9e3caaf1ae93aa6bd4111a752df3e06bcb2"} Mar 08 22:02:17.228823 master-0 kubenswrapper[7480]: I0308 22:02:17.217006 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx"] Mar 08 22:02:17.228823 master-0 
kubenswrapper[7480]: I0308 22:02:17.221876 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx" podUID="5ed9a4ec-9460-4e67-a372-ec6920c54832" containerName="cluster-cloud-controller-manager" containerID="cri-o://a4e327b57c620233ccd28a765190266e9a842db3606cdccec529793103ce7cd8" gracePeriod=30 Mar 08 22:02:17.228823 master-0 kubenswrapper[7480]: I0308 22:02:17.222126 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx" podUID="5ed9a4ec-9460-4e67-a372-ec6920c54832" containerName="kube-rbac-proxy" containerID="cri-o://ee260c45deb9e624f61d9e398299b8f6c2fb8df9d89676292ad77613ca07be93" gracePeriod=30 Mar 08 22:02:17.228823 master-0 kubenswrapper[7480]: I0308 22:02:17.222188 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx" podUID="5ed9a4ec-9460-4e67-a372-ec6920c54832" containerName="config-sync-controllers" containerID="cri-o://7240955b26ac1e289f96f129a5f6efabc10b20b49058f009c3b04a5f39d8facc" gracePeriod=30 Mar 08 22:02:17.236561 master-0 kubenswrapper[7480]: I0308 22:02:17.234233 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-qf2dp" podStartSLOduration=338.234202844 podStartE2EDuration="5m38.234202844s" podCreationTimestamp="2026-03-08 21:56:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:02:17.228737243 +0000 UTC m=+287.682357845" watchObservedRunningTime="2026-03-08 22:02:17.234202844 +0000 UTC m=+287.687823456" Mar 08 22:02:17.888089 master-0 kubenswrapper[7480]: I0308 22:02:17.886313 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-svxwz"] Mar 08 22:02:17.888426 master-0 kubenswrapper[7480]: I0308 22:02:17.888194 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-svxwz" Mar 08 22:02:17.891222 master-0 kubenswrapper[7480]: I0308 22:02:17.891190 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 08 22:02:17.891564 master-0 kubenswrapper[7480]: I0308 22:02:17.891479 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-clq2r" Mar 08 22:02:17.891841 master-0 kubenswrapper[7480]: I0308 22:02:17.891813 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 08 22:02:17.938128 master-0 kubenswrapper[7480]: I0308 22:02:17.938021 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4b5246dc-b715-4678-a3a9-878df57dd236-certs\") pod \"machine-config-server-svxwz\" (UID: \"4b5246dc-b715-4678-a3a9-878df57dd236\") " pod="openshift-machine-config-operator/machine-config-server-svxwz" Mar 08 22:02:17.938128 master-0 kubenswrapper[7480]: I0308 22:02:17.938126 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq7xb\" (UniqueName: \"kubernetes.io/projected/4b5246dc-b715-4678-a3a9-878df57dd236-kube-api-access-hq7xb\") pod \"machine-config-server-svxwz\" (UID: \"4b5246dc-b715-4678-a3a9-878df57dd236\") " pod="openshift-machine-config-operator/machine-config-server-svxwz" Mar 08 22:02:17.938403 master-0 kubenswrapper[7480]: I0308 22:02:17.938172 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4b5246dc-b715-4678-a3a9-878df57dd236-node-bootstrap-token\") pod \"machine-config-server-svxwz\" (UID: \"4b5246dc-b715-4678-a3a9-878df57dd236\") " pod="openshift-machine-config-operator/machine-config-server-svxwz" Mar 08 22:02:18.039514 master-0 kubenswrapper[7480]: I0308 22:02:18.039439 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4b5246dc-b715-4678-a3a9-878df57dd236-certs\") pod \"machine-config-server-svxwz\" (UID: \"4b5246dc-b715-4678-a3a9-878df57dd236\") " pod="openshift-machine-config-operator/machine-config-server-svxwz" Mar 08 22:02:18.039514 master-0 kubenswrapper[7480]: I0308 22:02:18.039508 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq7xb\" (UniqueName: \"kubernetes.io/projected/4b5246dc-b715-4678-a3a9-878df57dd236-kube-api-access-hq7xb\") pod \"machine-config-server-svxwz\" (UID: \"4b5246dc-b715-4678-a3a9-878df57dd236\") " pod="openshift-machine-config-operator/machine-config-server-svxwz" Mar 08 22:02:18.039790 master-0 kubenswrapper[7480]: I0308 22:02:18.039686 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4b5246dc-b715-4678-a3a9-878df57dd236-node-bootstrap-token\") pod \"machine-config-server-svxwz\" (UID: \"4b5246dc-b715-4678-a3a9-878df57dd236\") " pod="openshift-machine-config-operator/machine-config-server-svxwz" Mar 08 22:02:18.047194 master-0 kubenswrapper[7480]: I0308 22:02:18.047147 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/4b5246dc-b715-4678-a3a9-878df57dd236-certs\") pod \"machine-config-server-svxwz\" (UID: \"4b5246dc-b715-4678-a3a9-878df57dd236\") " pod="openshift-machine-config-operator/machine-config-server-svxwz" Mar 08 22:02:18.050712 master-0 kubenswrapper[7480]: I0308 22:02:18.050657 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4b5246dc-b715-4678-a3a9-878df57dd236-node-bootstrap-token\") pod \"machine-config-server-svxwz\" (UID: \"4b5246dc-b715-4678-a3a9-878df57dd236\") " pod="openshift-machine-config-operator/machine-config-server-svxwz" Mar 08 22:02:18.062904 master-0 kubenswrapper[7480]: I0308 22:02:18.060673 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq7xb\" (UniqueName: \"kubernetes.io/projected/4b5246dc-b715-4678-a3a9-878df57dd236-kube-api-access-hq7xb\") pod \"machine-config-server-svxwz\" (UID: \"4b5246dc-b715-4678-a3a9-878df57dd236\") " pod="openshift-machine-config-operator/machine-config-server-svxwz" Mar 08 22:02:18.200633 master-0 kubenswrapper[7480]: I0308 22:02:18.200471 7480 generic.go:334] "Generic (PLEG): container finished" podID="5ed9a4ec-9460-4e67-a372-ec6920c54832" containerID="ee260c45deb9e624f61d9e398299b8f6c2fb8df9d89676292ad77613ca07be93" exitCode=0 Mar 08 22:02:18.200633 master-0 kubenswrapper[7480]: I0308 22:02:18.200531 7480 generic.go:334] "Generic (PLEG): container finished" podID="5ed9a4ec-9460-4e67-a372-ec6920c54832" containerID="7240955b26ac1e289f96f129a5f6efabc10b20b49058f009c3b04a5f39d8facc" exitCode=0 Mar 08 22:02:18.200633 master-0 kubenswrapper[7480]: I0308 22:02:18.200547 7480 generic.go:334] "Generic (PLEG): container finished" podID="5ed9a4ec-9460-4e67-a372-ec6920c54832" containerID="a4e327b57c620233ccd28a765190266e9a842db3606cdccec529793103ce7cd8" exitCode=0 Mar 08 22:02:18.201701 master-0 kubenswrapper[7480]: I0308 22:02:18.200828 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx" event={"ID":"5ed9a4ec-9460-4e67-a372-ec6920c54832","Type":"ContainerDied","Data":"ee260c45deb9e624f61d9e398299b8f6c2fb8df9d89676292ad77613ca07be93"} Mar 08 22:02:18.201701 master-0 kubenswrapper[7480]: I0308 22:02:18.200865 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx" event={"ID":"5ed9a4ec-9460-4e67-a372-ec6920c54832","Type":"ContainerDied","Data":"7240955b26ac1e289f96f129a5f6efabc10b20b49058f009c3b04a5f39d8facc"} Mar 08 22:02:18.201701 master-0 kubenswrapper[7480]: I0308 22:02:18.200877 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx" event={"ID":"5ed9a4ec-9460-4e67-a372-ec6920c54832","Type":"ContainerDied","Data":"a4e327b57c620233ccd28a765190266e9a842db3606cdccec529793103ce7cd8"} Mar 08 22:02:18.225148 master-0 kubenswrapper[7480]: I0308 22:02:18.225024 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-svxwz" Mar 08 22:02:18.313115 master-0 kubenswrapper[7480]: I0308 22:02:18.313030 7480 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx" Mar 08 22:02:18.314422 master-0 kubenswrapper[7480]: W0308 22:02:18.314380 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b5246dc_b715_4678_a3a9_878df57dd236.slice/crio-44048b3590f244e6e1938c80ea9293e108819fbabf668d1d67a4241c09d483ab WatchSource:0}: Error finding container 44048b3590f244e6e1938c80ea9293e108819fbabf668d1d67a4241c09d483ab: Status 404 returned error can't find the container with id 44048b3590f244e6e1938c80ea9293e108819fbabf668d1d67a4241c09d483ab Mar 08 22:02:18.442987 master-0 kubenswrapper[7480]: I0308 22:02:18.442930 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5ed9a4ec-9460-4e67-a372-ec6920c54832-auth-proxy-config\") pod \"5ed9a4ec-9460-4e67-a372-ec6920c54832\" (UID: \"5ed9a4ec-9460-4e67-a372-ec6920c54832\") " Mar 08 22:02:18.443205 master-0 kubenswrapper[7480]: I0308 22:02:18.443061 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5ed9a4ec-9460-4e67-a372-ec6920c54832-cloud-controller-manager-operator-tls\") pod \"5ed9a4ec-9460-4e67-a372-ec6920c54832\" (UID: \"5ed9a4ec-9460-4e67-a372-ec6920c54832\") " Mar 08 22:02:18.443205 master-0 kubenswrapper[7480]: I0308 22:02:18.443125 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5ed9a4ec-9460-4e67-a372-ec6920c54832-images\") pod \"5ed9a4ec-9460-4e67-a372-ec6920c54832\" (UID: \"5ed9a4ec-9460-4e67-a372-ec6920c54832\") " Mar 08 22:02:18.443205 master-0 kubenswrapper[7480]: I0308 22:02:18.443188 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rht7d\" (UniqueName: \"kubernetes.io/projected/5ed9a4ec-9460-4e67-a372-ec6920c54832-kube-api-access-rht7d\") pod \"5ed9a4ec-9460-4e67-a372-ec6920c54832\" (UID: \"5ed9a4ec-9460-4e67-a372-ec6920c54832\") " Mar 08 22:02:18.443352 master-0 kubenswrapper[7480]: I0308 22:02:18.443229 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5ed9a4ec-9460-4e67-a372-ec6920c54832-host-etc-kube\") pod \"5ed9a4ec-9460-4e67-a372-ec6920c54832\" (UID: \"5ed9a4ec-9460-4e67-a372-ec6920c54832\") " Mar 08 22:02:18.443570 master-0 kubenswrapper[7480]: I0308 22:02:18.443522 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ed9a4ec-9460-4e67-a372-ec6920c54832-host-etc-kube" (OuterVolumeSpecName: "host-etc-kube") pod "5ed9a4ec-9460-4e67-a372-ec6920c54832" (UID: "5ed9a4ec-9460-4e67-a372-ec6920c54832"). InnerVolumeSpecName "host-etc-kube". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:02:18.443706 master-0 kubenswrapper[7480]: I0308 22:02:18.443660 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ed9a4ec-9460-4e67-a372-ec6920c54832-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "5ed9a4ec-9460-4e67-a372-ec6920c54832" (UID: "5ed9a4ec-9460-4e67-a372-ec6920c54832"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:02:18.444265 master-0 kubenswrapper[7480]: I0308 22:02:18.444217 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ed9a4ec-9460-4e67-a372-ec6920c54832-images" (OuterVolumeSpecName: "images") pod "5ed9a4ec-9460-4e67-a372-ec6920c54832" (UID: "5ed9a4ec-9460-4e67-a372-ec6920c54832"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:02:18.447921 master-0 kubenswrapper[7480]: I0308 22:02:18.447860 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ed9a4ec-9460-4e67-a372-ec6920c54832-kube-api-access-rht7d" (OuterVolumeSpecName: "kube-api-access-rht7d") pod "5ed9a4ec-9460-4e67-a372-ec6920c54832" (UID: "5ed9a4ec-9460-4e67-a372-ec6920c54832"). InnerVolumeSpecName "kube-api-access-rht7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 22:02:18.448456 master-0 kubenswrapper[7480]: I0308 22:02:18.448405 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ed9a4ec-9460-4e67-a372-ec6920c54832-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "5ed9a4ec-9460-4e67-a372-ec6920c54832" (UID: "5ed9a4ec-9460-4e67-a372-ec6920c54832"). InnerVolumeSpecName "cloud-controller-manager-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 22:02:18.544858 master-0 kubenswrapper[7480]: I0308 22:02:18.544762 7480 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5ed9a4ec-9460-4e67-a372-ec6920c54832-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 08 22:02:18.544858 master-0 kubenswrapper[7480]: I0308 22:02:18.544809 7480 reconciler_common.go:293] "Volume detached for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5ed9a4ec-9460-4e67-a372-ec6920c54832-cloud-controller-manager-operator-tls\") on node \"master-0\" DevicePath \"\"" Mar 08 22:02:18.544858 master-0 kubenswrapper[7480]: I0308 22:02:18.544821 7480 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5ed9a4ec-9460-4e67-a372-ec6920c54832-images\") on node \"master-0\" DevicePath \"\"" Mar 08 22:02:18.544858 master-0 kubenswrapper[7480]: I0308 22:02:18.544833 7480 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rht7d\" (UniqueName: \"kubernetes.io/projected/5ed9a4ec-9460-4e67-a372-ec6920c54832-kube-api-access-rht7d\") on node \"master-0\" DevicePath \"\"" Mar 08 22:02:18.544858 master-0 kubenswrapper[7480]: I0308 22:02:18.544846 7480 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5ed9a4ec-9460-4e67-a372-ec6920c54832-host-etc-kube\") on node \"master-0\" DevicePath \"\"" Mar 08 22:02:19.213098 master-0 kubenswrapper[7480]: I0308 22:02:19.212989 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kx9pl" event={"ID":"e635b0da-956b-4636-bc9b-61f231241908","Type":"ContainerStarted","Data":"10bf0b2fa0214d3d300f54a6ad731b796e7eda2be6d3ed5948e65d2b920e7ced"} Mar 08 22:02:19.214230 master-0 kubenswrapper[7480]: I0308 22:02:19.214170 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kx9pl" Mar 08 
22:02:19.222940 master-0 kubenswrapper[7480]: I0308 22:02:19.222863 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx" event={"ID":"5ed9a4ec-9460-4e67-a372-ec6920c54832","Type":"ContainerDied","Data":"0813d0cf4cf35a09a96a86777576a3fc55351040c297180f6267d3a48b50a9ac"} Mar 08 22:02:19.223249 master-0 kubenswrapper[7480]: I0308 22:02:19.222962 7480 scope.go:117] "RemoveContainer" containerID="ee260c45deb9e624f61d9e398299b8f6c2fb8df9d89676292ad77613ca07be93" Mar 08 22:02:19.223323 master-0 kubenswrapper[7480]: I0308 22:02:19.223246 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx" Mar 08 22:02:19.227390 master-0 kubenswrapper[7480]: I0308 22:02:19.227314 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" event={"ID":"81f5ed55-225c-41e2-bc9d-b41063a604c9","Type":"ContainerStarted","Data":"043bea0bfcad80d082009c992d1913377d82e97e1ea5f2b55356dd0fdc8a2c8f"} Mar 08 22:02:19.229251 master-0 kubenswrapper[7480]: I0308 22:02:19.228967 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kx9pl" Mar 08 22:02:19.229710 master-0 kubenswrapper[7480]: I0308 22:02:19.229639 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-svxwz" event={"ID":"4b5246dc-b715-4678-a3a9-878df57dd236","Type":"ContainerStarted","Data":"8622091cf260a9c109c08c1a2cfc7b6b626d8462a700065181f25b83cce99b0c"} Mar 08 22:02:19.229710 master-0 kubenswrapper[7480]: I0308 22:02:19.229688 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-svxwz" event={"ID":"4b5246dc-b715-4678-a3a9-878df57dd236","Type":"ContainerStarted","Data":"44048b3590f244e6e1938c80ea9293e108819fbabf668d1d67a4241c09d483ab"} Mar 08 22:02:19.249825 master-0 kubenswrapper[7480]: I0308 22:02:19.249773 7480 scope.go:117] "RemoveContainer" containerID="7240955b26ac1e289f96f129a5f6efabc10b20b49058f009c3b04a5f39d8facc" Mar 08 22:02:19.258685 master-0 kubenswrapper[7480]: I0308 22:02:19.258533 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kx9pl" podStartSLOduration=259.972960535 podStartE2EDuration="4m22.258498192s" podCreationTimestamp="2026-03-08 21:57:57 +0000 UTC" firstStartedPulling="2026-03-08 22:02:16.020880363 +0000 UTC m=+286.474500965" lastFinishedPulling="2026-03-08 22:02:18.30641801 +0000 UTC m=+288.760038622" observedRunningTime="2026-03-08 22:02:19.250253639 +0000 UTC m=+289.703874281" watchObservedRunningTime="2026-03-08 22:02:19.258498192 +0000 UTC m=+289.712118804" Mar 08 22:02:19.293112 master-0 kubenswrapper[7480]: I0308 22:02:19.292628 7480 scope.go:117] "RemoveContainer" containerID="a4e327b57c620233ccd28a765190266e9a842db3606cdccec529793103ce7cd8" Mar 08 22:02:19.315086 master-0 kubenswrapper[7480]: I0308 22:02:19.314927 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-svxwz" podStartSLOduration=2.314895529 podStartE2EDuration="2.314895529s" podCreationTimestamp="2026-03-08 22:02:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:02:19.28204028 +0000 UTC m=+289.735660882" watchObservedRunningTime="2026-03-08 22:02:19.314895529 +0000 UTC m=+289.768516171" Mar 08 22:02:19.350048 master-0 kubenswrapper[7480]: I0308 22:02:19.349136 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podStartSLOduration=258.585215886 podStartE2EDuration="4m21.349040292s" podCreationTimestamp="2026-03-08 21:57:58 +0000 UTC" firstStartedPulling="2026-03-08 22:02:15.548602879 +0000 UTC m=+286.002223481" lastFinishedPulling="2026-03-08 22:02:18.312427275 +0000 UTC m=+288.766047887" observedRunningTime="2026-03-08 22:02:19.343940519 +0000 UTC m=+289.797561141" watchObservedRunningTime="2026-03-08 22:02:19.349040292 +0000 UTC m=+289.802660924" Mar 08 22:02:19.384465 master-0 kubenswrapper[7480]: I0308 22:02:19.384385 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx"] Mar 08 22:02:19.390108 master-0 kubenswrapper[7480]: I0308 22:02:19.390042 7480 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-kmbpx"] Mar 08 22:02:19.416442 master-0 kubenswrapper[7480]: I0308 22:02:19.416365 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh"] Mar 08 22:02:19.416892 master-0 kubenswrapper[7480]: E0308 22:02:19.416857 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ed9a4ec-9460-4e67-a372-ec6920c54832" containerName="config-sync-controllers" Mar 08 22:02:19.416892 master-0 kubenswrapper[7480]: I0308 22:02:19.416880 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ed9a4ec-9460-4e67-a372-ec6920c54832" containerName="config-sync-controllers" Mar 08 22:02:19.416892 master-0 kubenswrapper[7480]: E0308 22:02:19.416889 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ed9a4ec-9460-4e67-a372-ec6920c54832" containerName="cluster-cloud-controller-manager" Mar 08 22:02:19.416892 master-0 kubenswrapper[7480]: I0308 22:02:19.416896 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ed9a4ec-9460-4e67-a372-ec6920c54832" containerName="cluster-cloud-controller-manager" Mar 08 22:02:19.417048 master-0 kubenswrapper[7480]: E0308 22:02:19.416904 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ed9a4ec-9460-4e67-a372-ec6920c54832" containerName="kube-rbac-proxy" Mar 08 22:02:19.417048 master-0 kubenswrapper[7480]: I0308 22:02:19.416912 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ed9a4ec-9460-4e67-a372-ec6920c54832" containerName="kube-rbac-proxy" Mar 08 22:02:19.417048 master-0 kubenswrapper[7480]: I0308 22:02:19.417027 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ed9a4ec-9460-4e67-a372-ec6920c54832" containerName="config-sync-controllers" Mar 08 22:02:19.417048 master-0 kubenswrapper[7480]: I0308 22:02:19.417038 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ed9a4ec-9460-4e67-a372-ec6920c54832" containerName="kube-rbac-proxy" Mar 08 22:02:19.417048 master-0 kubenswrapper[7480]: I0308 22:02:19.417045 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ed9a4ec-9460-4e67-a372-ec6920c54832" 
containerName="cluster-cloud-controller-manager" Mar 08 22:02:19.418049 master-0 kubenswrapper[7480]: I0308 22:02:19.418014 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" Mar 08 22:02:19.428895 master-0 kubenswrapper[7480]: I0308 22:02:19.428843 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 08 22:02:19.429180 master-0 kubenswrapper[7480]: I0308 22:02:19.429134 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 08 22:02:19.429326 master-0 kubenswrapper[7480]: I0308 22:02:19.429281 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-wjqj5" Mar 08 22:02:19.429714 master-0 kubenswrapper[7480]: I0308 22:02:19.429666 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 08 22:02:19.429868 master-0 kubenswrapper[7480]: I0308 22:02:19.429830 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 08 22:02:19.440099 master-0 kubenswrapper[7480]: I0308 22:02:19.436468 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 08 22:02:19.470188 master-0 kubenswrapper[7480]: I0308 22:02:19.462985 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v2k8\" (UniqueName: \"kubernetes.io/projected/d063b330-4180-43de-a248-c573183d96f1-kube-api-access-8v2k8\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh\" (UID: \"d063b330-4180-43de-a248-c573183d96f1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" Mar 08 22:02:19.470188 master-0 kubenswrapper[7480]: I0308 22:02:19.463110 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d063b330-4180-43de-a248-c573183d96f1-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh\" (UID: \"d063b330-4180-43de-a248-c573183d96f1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" Mar 08 22:02:19.470188 master-0 kubenswrapper[7480]: I0308 22:02:19.463164 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/d063b330-4180-43de-a248-c573183d96f1-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh\" (UID: \"d063b330-4180-43de-a248-c573183d96f1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" Mar 08 22:02:19.470188 master-0 kubenswrapper[7480]: I0308 22:02:19.463221 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/d063b330-4180-43de-a248-c573183d96f1-cloud-controller-manager-operator-tls\") pod 
\"cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh\" (UID: \"d063b330-4180-43de-a248-c573183d96f1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" Mar 08 22:02:19.470188 master-0 kubenswrapper[7480]: I0308 22:02:19.463347 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d063b330-4180-43de-a248-c573183d96f1-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh\" (UID: \"d063b330-4180-43de-a248-c573183d96f1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" Mar 08 22:02:19.504119 master-0 kubenswrapper[7480]: I0308 22:02:19.503211 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:02:19.513200 master-0 kubenswrapper[7480]: I0308 22:02:19.511272 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:19.513200 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:19.513200 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:19.513200 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:19.513200 master-0 kubenswrapper[7480]: I0308 22:02:19.511331 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:19.564699 master-0 kubenswrapper[7480]: I0308 22:02:19.564633 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d063b330-4180-43de-a248-c573183d96f1-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh\" (UID: \"d063b330-4180-43de-a248-c573183d96f1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" Mar 08 22:02:19.565089 master-0 kubenswrapper[7480]: I0308 22:02:19.565014 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v2k8\" (UniqueName: \"kubernetes.io/projected/d063b330-4180-43de-a248-c573183d96f1-kube-api-access-8v2k8\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh\" (UID: \"d063b330-4180-43de-a248-c573183d96f1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" Mar 08 22:02:19.565174 master-0 kubenswrapper[7480]: I0308 22:02:19.565153 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d063b330-4180-43de-a248-c573183d96f1-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh\" (UID: \"d063b330-4180-43de-a248-c573183d96f1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" Mar 08 22:02:19.565214 master-0 kubenswrapper[7480]: I0308 22:02:19.565205 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/d063b330-4180-43de-a248-c573183d96f1-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh\" (UID: \"d063b330-4180-43de-a248-c573183d96f1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" Mar 08 22:02:19.565246 master-0 kubenswrapper[7480]: I0308 22:02:19.565236 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/d063b330-4180-43de-a248-c573183d96f1-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh\" (UID: \"d063b330-4180-43de-a248-c573183d96f1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" Mar 08 22:02:19.565771 master-0 kubenswrapper[7480]: I0308 22:02:19.565735 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d063b330-4180-43de-a248-c573183d96f1-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh\" (UID: \"d063b330-4180-43de-a248-c573183d96f1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" Mar 08 22:02:19.565859 master-0 kubenswrapper[7480]: I0308 22:02:19.565838 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/d063b330-4180-43de-a248-c573183d96f1-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh\" (UID: \"d063b330-4180-43de-a248-c573183d96f1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" Mar 08 22:02:19.566305 master-0 kubenswrapper[7480]: I0308 22:02:19.566276 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d063b330-4180-43de-a248-c573183d96f1-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh\" (UID: \"d063b330-4180-43de-a248-c573183d96f1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" Mar 08 22:02:19.568898 master-0 kubenswrapper[7480]: I0308 22:02:19.568843 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/d063b330-4180-43de-a248-c573183d96f1-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh\" (UID: \"d063b330-4180-43de-a248-c573183d96f1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" Mar 08 22:02:19.582230 master-0 kubenswrapper[7480]: I0308 22:02:19.582193 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v2k8\" (UniqueName: \"kubernetes.io/projected/d063b330-4180-43de-a248-c573183d96f1-kube-api-access-8v2k8\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh\" (UID: \"d063b330-4180-43de-a248-c573183d96f1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" Mar 08 22:02:19.599998 master-0 kubenswrapper[7480]: I0308 22:02:19.599925 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9"] Mar 08 22:02:19.600805 master-0 
Mar 08 22:02:19.600805 master-0 kubenswrapper[7480]: I0308 22:02:19.600778 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9"
Mar 08 22:02:19.602407 master-0 kubenswrapper[7480]: I0308 22:02:19.602355 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Mar 08 22:02:19.602588 master-0 kubenswrapper[7480]: I0308 22:02:19.602560 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Mar 08 22:02:19.603434 master-0 kubenswrapper[7480]: I0308 22:02:19.603398 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-2sq4s"
Mar 08 22:02:19.603596 master-0 kubenswrapper[7480]: I0308 22:02:19.603556 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Mar 08 22:02:19.619910 master-0 kubenswrapper[7480]: I0308 22:02:19.619845 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9"]
Mar 08 22:02:19.666979 master-0 kubenswrapper[7480]: I0308 22:02:19.666920 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdz7m\" (UniqueName: \"kubernetes.io/projected/8a7e92d4-b7ed-408b-b7cf-00209a627bea-kube-api-access-qdz7m\") pod \"prometheus-operator-5ff8674d55-jd2m9\" (UID: \"8a7e92d4-b7ed-408b-b7cf-00209a627bea\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9"
Mar 08 22:02:19.667252 master-0 kubenswrapper[7480]: I0308 22:02:19.666995 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a7e92d4-b7ed-408b-b7cf-00209a627bea-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-jd2m9\" (UID: \"8a7e92d4-b7ed-408b-b7cf-00209a627bea\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9"
Mar 08 22:02:19.667252 master-0 kubenswrapper[7480]: I0308 22:02:19.667036 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8a7e92d4-b7ed-408b-b7cf-00209a627bea-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-jd2m9\" (UID: \"8a7e92d4-b7ed-408b-b7cf-00209a627bea\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9"
Mar 08 22:02:19.667408 master-0 kubenswrapper[7480]: I0308 22:02:19.667322 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8a7e92d4-b7ed-408b-b7cf-00209a627bea-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-jd2m9\" (UID: \"8a7e92d4-b7ed-408b-b7cf-00209a627bea\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9"
Mar 08 22:02:19.747892 master-0 kubenswrapper[7480]: I0308 22:02:19.747297 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh"
Mar 08 22:02:19.766846 master-0 kubenswrapper[7480]: W0308 22:02:19.766762 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd063b330_4180_43de_a248_c573183d96f1.slice/crio-f7e80d6737a7317d9e7f0a0998357862025d52425ce316b9131469a8ee87029a WatchSource:0}: Error finding container f7e80d6737a7317d9e7f0a0998357862025d52425ce316b9131469a8ee87029a: Status 404 returned error can't find the container with id f7e80d6737a7317d9e7f0a0998357862025d52425ce316b9131469a8ee87029a
Mar 08 22:02:19.769198 master-0 kubenswrapper[7480]: I0308 22:02:19.768405 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdz7m\" (UniqueName: \"kubernetes.io/projected/8a7e92d4-b7ed-408b-b7cf-00209a627bea-kube-api-access-qdz7m\") pod \"prometheus-operator-5ff8674d55-jd2m9\" (UID: \"8a7e92d4-b7ed-408b-b7cf-00209a627bea\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9"
Mar 08 22:02:19.769198 master-0 kubenswrapper[7480]: I0308 22:02:19.768470 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a7e92d4-b7ed-408b-b7cf-00209a627bea-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-jd2m9\" (UID: \"8a7e92d4-b7ed-408b-b7cf-00209a627bea\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9"
Mar 08 22:02:19.769198 master-0 kubenswrapper[7480]: I0308 22:02:19.768505 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8a7e92d4-b7ed-408b-b7cf-00209a627bea-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-jd2m9\" (UID: \"8a7e92d4-b7ed-408b-b7cf-00209a627bea\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9"
Mar 08 22:02:19.769198 master-0 kubenswrapper[7480]: I0308 22:02:19.768556 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8a7e92d4-b7ed-408b-b7cf-00209a627bea-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-jd2m9\" (UID: \"8a7e92d4-b7ed-408b-b7cf-00209a627bea\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9"
Mar 08 22:02:19.770763 master-0 kubenswrapper[7480]: I0308 22:02:19.770428 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8a7e92d4-b7ed-408b-b7cf-00209a627bea-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-jd2m9\" (UID: \"8a7e92d4-b7ed-408b-b7cf-00209a627bea\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9"
Mar 08 22:02:19.772258 master-0 kubenswrapper[7480]: I0308 22:02:19.771653 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8a7e92d4-b7ed-408b-b7cf-00209a627bea-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-jd2m9\" (UID: \"8a7e92d4-b7ed-408b-b7cf-00209a627bea\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9"
Mar 08 22:02:19.777400 master-0 kubenswrapper[7480]: I0308 22:02:19.777363 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a7e92d4-b7ed-408b-b7cf-00209a627bea-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-jd2m9\" (UID: \"8a7e92d4-b7ed-408b-b7cf-00209a627bea\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9"
Mar 08 22:02:19.793423 master-0 kubenswrapper[7480]: I0308 22:02:19.793244 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdz7m\" (UniqueName: \"kubernetes.io/projected/8a7e92d4-b7ed-408b-b7cf-00209a627bea-kube-api-access-qdz7m\") pod \"prometheus-operator-5ff8674d55-jd2m9\" (UID: \"8a7e92d4-b7ed-408b-b7cf-00209a627bea\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9"
Mar 08 22:02:19.794140 master-0 kubenswrapper[7480]: I0308 22:02:19.794003 7480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ed9a4ec-9460-4e67-a372-ec6920c54832" path="/var/lib/kubelet/pods/5ed9a4ec-9460-4e67-a372-ec6920c54832/volumes"
Mar 08 22:02:19.926777 master-0 kubenswrapper[7480]: I0308 22:02:19.926711 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9"
Mar 08 22:02:20.240094 master-0 kubenswrapper[7480]: I0308 22:02:20.240021 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" event={"ID":"d063b330-4180-43de-a248-c573183d96f1","Type":"ContainerStarted","Data":"6db16eaa3133d25587d14c0b9e526e3d55af3b3bbd2fa785bac1c1b404fb50fd"}
Mar 08 22:02:20.240094 master-0 kubenswrapper[7480]: I0308 22:02:20.240085 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" event={"ID":"d063b330-4180-43de-a248-c573183d96f1","Type":"ContainerStarted","Data":"f7e80d6737a7317d9e7f0a0998357862025d52425ce316b9131469a8ee87029a"}
Mar 08 22:02:20.344720 master-0 kubenswrapper[7480]: I0308 22:02:20.344668 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9"]
Mar 08 22:02:20.349381 master-0 kubenswrapper[7480]: W0308 22:02:20.349356 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a7e92d4_b7ed_408b_b7cf_00209a627bea.slice/crio-41f9b34125839a0766d5a064b548741e6d8afe1be3f01659bf8e4366efb2cc07 WatchSource:0}: Error finding container 41f9b34125839a0766d5a064b548741e6d8afe1be3f01659bf8e4366efb2cc07: Status 404 returned error can't find the container with id 41f9b34125839a0766d5a064b548741e6d8afe1be3f01659bf8e4366efb2cc07
Mar 08 22:02:20.505101 master-0 kubenswrapper[7480]: I0308 22:02:20.505006 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:02:20.505101 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:02:20.505101 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:02:20.505101 master-0 kubenswrapper[7480]: healthz check failed
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:21.251879 master-0 kubenswrapper[7480]: I0308 22:02:21.250952 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9" event={"ID":"8a7e92d4-b7ed-408b-b7cf-00209a627bea","Type":"ContainerStarted","Data":"41f9b34125839a0766d5a064b548741e6d8afe1be3f01659bf8e4366efb2cc07"} Mar 08 22:02:21.257922 master-0 kubenswrapper[7480]: I0308 22:02:21.257661 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" event={"ID":"d063b330-4180-43de-a248-c573183d96f1","Type":"ContainerStarted","Data":"defdda10b4f2af3f2f0aeb50bfb3ec0613908d04158d59043799bc29da0a720e"} Mar 08 22:02:21.257922 master-0 kubenswrapper[7480]: I0308 22:02:21.257695 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" event={"ID":"d063b330-4180-43de-a248-c573183d96f1","Type":"ContainerStarted","Data":"f35f20071c5b0df4134c3bd22227a8034ca2417ef7250451b3ec29b800fa74dc"} Mar 08 22:02:21.502495 master-0 kubenswrapper[7480]: I0308 22:02:21.502302 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:21.502495 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:21.502495 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:21.502495 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:21.502495 master-0 kubenswrapper[7480]: I0308 22:02:21.502400 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:22.265754 master-0 kubenswrapper[7480]: I0308 22:02:22.265685 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9" event={"ID":"8a7e92d4-b7ed-408b-b7cf-00209a627bea","Type":"ContainerStarted","Data":"3e9ee4ba2b30507c13973fee0309fba4893b4e5e93df158a36a62373121b00ef"} Mar 08 22:02:22.503201 master-0 kubenswrapper[7480]: I0308 22:02:22.503004 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:22.503201 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:22.503201 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:22.503201 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:22.503497 master-0 kubenswrapper[7480]: I0308 22:02:22.503197 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:23.274312 master-0 kubenswrapper[7480]: I0308 22:02:23.274244 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9" 
event={"ID":"8a7e92d4-b7ed-408b-b7cf-00209a627bea","Type":"ContainerStarted","Data":"5bd0cf5d8baf3a2aa869e1e1bdc081c235c25122fbe0ed40a05cf502e6556dd7"} Mar 08 22:02:23.298016 master-0 kubenswrapper[7480]: I0308 22:02:23.297934 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" podStartSLOduration=4.297911618 podStartE2EDuration="4.297911618s" podCreationTimestamp="2026-03-08 22:02:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:02:21.278751384 +0000 UTC m=+291.732371996" watchObservedRunningTime="2026-03-08 22:02:23.297911618 +0000 UTC m=+293.751532230" Mar 08 22:02:23.298382 master-0 kubenswrapper[7480]: I0308 22:02:23.298350 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9" podStartSLOduration=2.698836367 podStartE2EDuration="4.298344348s" podCreationTimestamp="2026-03-08 22:02:19 +0000 UTC" firstStartedPulling="2026-03-08 22:02:20.354198163 +0000 UTC m=+290.807818765" lastFinishedPulling="2026-03-08 22:02:21.953706144 +0000 UTC m=+292.407326746" observedRunningTime="2026-03-08 22:02:23.296590523 +0000 UTC m=+293.750211135" watchObservedRunningTime="2026-03-08 22:02:23.298344348 +0000 UTC m=+293.751964950" Mar 08 22:02:23.503919 master-0 kubenswrapper[7480]: I0308 22:02:23.503839 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:23.503919 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:23.503919 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:23.503919 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:23.504263 master-0 kubenswrapper[7480]: I0308 22:02:23.503971 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:24.504746 master-0 kubenswrapper[7480]: I0308 22:02:24.504660 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:24.504746 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:24.504746 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:24.504746 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:24.504746 master-0 kubenswrapper[7480]: I0308 22:02:24.504755 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:25.001543 master-0 kubenswrapper[7480]: I0308 22:02:25.001477 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8"] Mar 08 22:02:25.002969 master-0 kubenswrapper[7480]: I0308 22:02:25.002949 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8" Mar 08 22:02:25.011622 master-0 kubenswrapper[7480]: I0308 22:02:25.011575 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 08 22:02:25.011897 master-0 kubenswrapper[7480]: I0308 22:02:25.011648 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 08 22:02:25.014509 master-0 kubenswrapper[7480]: I0308 22:02:25.014464 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-cfkxm" Mar 08 22:02:25.026579 master-0 kubenswrapper[7480]: I0308 22:02:25.026509 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8"] Mar 08 22:02:25.074045 master-0 kubenswrapper[7480]: I0308 22:02:25.073945 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-l8k5g"] Mar 08 22:02:25.084565 master-0 kubenswrapper[7480]: I0308 22:02:25.084509 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:02:25.085767 master-0 kubenswrapper[7480]: I0308 22:02:25.085665 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc"] Mar 08 22:02:25.088956 master-0 kubenswrapper[7480]: I0308 22:02:25.088893 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:02:25.089344 master-0 kubenswrapper[7480]: I0308 22:02:25.089292 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 08 22:02:25.089405 master-0 kubenswrapper[7480]: I0308 22:02:25.089333 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 08 22:02:25.089490 master-0 kubenswrapper[7480]: I0308 22:02:25.089303 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-ldwk8" Mar 08 22:02:25.093151 master-0 kubenswrapper[7480]: I0308 22:02:25.093098 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-t7cwt" Mar 08 22:02:25.099429 master-0 kubenswrapper[7480]: I0308 22:02:25.099361 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 08 22:02:25.099664 master-0 kubenswrapper[7480]: I0308 22:02:25.099636 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 08 22:02:25.099863 master-0 kubenswrapper[7480]: I0308 22:02:25.099821 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 08 22:02:25.128710 master-0 kubenswrapper[7480]: I0308 22:02:25.128653 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc"] Mar 08 22:02:25.188092 master-0 kubenswrapper[7480]: I0308 22:02:25.187802 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/0269ed52-a753-49aa-9c38-c7aee23cebbd-metrics-client-ca\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:02:25.188092 master-0 kubenswrapper[7480]: I0308 22:02:25.187870 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0269ed52-a753-49aa-9c38-c7aee23cebbd-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:02:25.188092 master-0 kubenswrapper[7480]: I0308 22:02:25.187909 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0269ed52-a753-49aa-9c38-c7aee23cebbd-sys\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:02:25.188092 master-0 kubenswrapper[7480]: I0308 22:02:25.187936 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0269ed52-a753-49aa-9c38-c7aee23cebbd-node-exporter-wtmp\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:02:25.188092 master-0 kubenswrapper[7480]: I0308 22:02:25.187964 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c377685c-2024-4ef7-932d-5858eeb0d9bd-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-8rbn8\" (UID: \"c377685c-2024-4ef7-932d-5858eeb0d9bd\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8" Mar 08 22:02:25.188092 master-0 kubenswrapper[7480]: I0308 22:02:25.188010 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c377685c-2024-4ef7-932d-5858eeb0d9bd-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-8rbn8\" (UID: \"c377685c-2024-4ef7-932d-5858eeb0d9bd\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8" Mar 08 22:02:25.188092 master-0 kubenswrapper[7480]: I0308 22:02:25.188033 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0269ed52-a753-49aa-9c38-c7aee23cebbd-node-exporter-textfile\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:02:25.188092 master-0 kubenswrapper[7480]: I0308 22:02:25.188062 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fp4g\" (UniqueName: \"kubernetes.io/projected/0269ed52-a753-49aa-9c38-c7aee23cebbd-kube-api-access-8fp4g\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:02:25.188519 master-0 kubenswrapper[7480]: I0308 22:02:25.188121 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z4s4\" (UniqueName: 
\"kubernetes.io/projected/c377685c-2024-4ef7-932d-5858eeb0d9bd-kube-api-access-4z4s4\") pod \"openshift-state-metrics-74cc79fd76-8rbn8\" (UID: \"c377685c-2024-4ef7-932d-5858eeb0d9bd\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8" Mar 08 22:02:25.188519 master-0 kubenswrapper[7480]: I0308 22:02:25.188150 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0269ed52-a753-49aa-9c38-c7aee23cebbd-node-exporter-tls\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:02:25.188519 master-0 kubenswrapper[7480]: I0308 22:02:25.188181 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c377685c-2024-4ef7-932d-5858eeb0d9bd-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-8rbn8\" (UID: \"c377685c-2024-4ef7-932d-5858eeb0d9bd\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8" Mar 08 22:02:25.188519 master-0 kubenswrapper[7480]: I0308 22:02:25.188201 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0269ed52-a753-49aa-9c38-c7aee23cebbd-root\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:02:25.289345 master-0 kubenswrapper[7480]: I0308 22:02:25.289208 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c377685c-2024-4ef7-932d-5858eeb0d9bd-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-8rbn8\" (UID: \"c377685c-2024-4ef7-932d-5858eeb0d9bd\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8" Mar 08 22:02:25.289345 master-0 kubenswrapper[7480]: I0308 22:02:25.289255 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0269ed52-a753-49aa-9c38-c7aee23cebbd-root\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:02:25.289345 master-0 kubenswrapper[7480]: I0308 22:02:25.289290 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/c3af41e9-c604-48da-bec5-df81c2ef3374-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:02:25.289345 master-0 kubenswrapper[7480]: I0308 22:02:25.289317 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0269ed52-a753-49aa-9c38-c7aee23cebbd-metrics-client-ca\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:02:25.289345 master-0 kubenswrapper[7480]: I0308 22:02:25.289337 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0269ed52-a753-49aa-9c38-c7aee23cebbd-node-exporter-kube-rbac-proxy-config\") pod 
\"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:02:25.289692 master-0 kubenswrapper[7480]: I0308 22:02:25.289361 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0269ed52-a753-49aa-9c38-c7aee23cebbd-sys\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:02:25.289692 master-0 kubenswrapper[7480]: I0308 22:02:25.289378 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2nfk\" (UniqueName: \"kubernetes.io/projected/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-api-access-z2nfk\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:02:25.289692 master-0 kubenswrapper[7480]: I0308 22:02:25.289396 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0269ed52-a753-49aa-9c38-c7aee23cebbd-node-exporter-wtmp\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:02:25.289692 master-0 kubenswrapper[7480]: I0308 22:02:25.289415 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c377685c-2024-4ef7-932d-5858eeb0d9bd-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-8rbn8\" (UID: \"c377685c-2024-4ef7-932d-5858eeb0d9bd\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8" Mar 08 22:02:25.289692 master-0 kubenswrapper[7480]: I0308 22:02:25.289441 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c3af41e9-c604-48da-bec5-df81c2ef3374-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:02:25.289692 master-0 kubenswrapper[7480]: I0308 22:02:25.289466 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c377685c-2024-4ef7-932d-5858eeb0d9bd-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-8rbn8\" (UID: \"c377685c-2024-4ef7-932d-5858eeb0d9bd\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8" Mar 08 22:02:25.289692 master-0 kubenswrapper[7480]: I0308 22:02:25.289490 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0269ed52-a753-49aa-9c38-c7aee23cebbd-node-exporter-textfile\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:02:25.289692 master-0 kubenswrapper[7480]: I0308 22:02:25.289514 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " 
pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:02:25.289692 master-0 kubenswrapper[7480]: I0308 22:02:25.289547 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fp4g\" (UniqueName: \"kubernetes.io/projected/0269ed52-a753-49aa-9c38-c7aee23cebbd-kube-api-access-8fp4g\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:02:25.289692 master-0 kubenswrapper[7480]: I0308 22:02:25.289581 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z4s4\" (UniqueName: \"kubernetes.io/projected/c377685c-2024-4ef7-932d-5858eeb0d9bd-kube-api-access-4z4s4\") pod \"openshift-state-metrics-74cc79fd76-8rbn8\" (UID: \"c377685c-2024-4ef7-932d-5858eeb0d9bd\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8" Mar 08 22:02:25.289692 master-0 kubenswrapper[7480]: I0308 22:02:25.289605 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0269ed52-a753-49aa-9c38-c7aee23cebbd-node-exporter-tls\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:02:25.289982 master-0 kubenswrapper[7480]: I0308 22:02:25.289709 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:02:25.289982 master-0 kubenswrapper[7480]: I0308 22:02:25.289790 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:02:25.290047 master-0 kubenswrapper[7480]: I0308 22:02:25.289795 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0269ed52-a753-49aa-9c38-c7aee23cebbd-node-exporter-wtmp\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:02:25.290615 master-0 kubenswrapper[7480]: I0308 22:02:25.290591 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c377685c-2024-4ef7-932d-5858eeb0d9bd-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-8rbn8\" (UID: \"c377685c-2024-4ef7-932d-5858eeb0d9bd\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8" Mar 08 22:02:25.299159 master-0 kubenswrapper[7480]: I0308 22:02:25.294851 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0269ed52-a753-49aa-9c38-c7aee23cebbd-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " 
pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:02:25.299159 master-0 kubenswrapper[7480]: I0308 22:02:25.294900 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0269ed52-a753-49aa-9c38-c7aee23cebbd-root\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:02:25.299159 master-0 kubenswrapper[7480]: I0308 22:02:25.295445 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0269ed52-a753-49aa-9c38-c7aee23cebbd-metrics-client-ca\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:02:25.299159 master-0 kubenswrapper[7480]: I0308 22:02:25.295667 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0269ed52-a753-49aa-9c38-c7aee23cebbd-node-exporter-textfile\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:02:25.299159 master-0 kubenswrapper[7480]: I0308 22:02:25.295706 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0269ed52-a753-49aa-9c38-c7aee23cebbd-sys\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:02:25.299159 master-0 kubenswrapper[7480]: I0308 22:02:25.299059 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c377685c-2024-4ef7-932d-5858eeb0d9bd-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-8rbn8\" (UID: \"c377685c-2024-4ef7-932d-5858eeb0d9bd\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8" Mar 08 22:02:25.315093 master-0 kubenswrapper[7480]: I0308 22:02:25.311219 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c377685c-2024-4ef7-932d-5858eeb0d9bd-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-8rbn8\" (UID: \"c377685c-2024-4ef7-932d-5858eeb0d9bd\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8" Mar 08 22:02:25.317410 master-0 kubenswrapper[7480]: I0308 22:02:25.317389 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0269ed52-a753-49aa-9c38-c7aee23cebbd-node-exporter-tls\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:02:25.338795 master-0 kubenswrapper[7480]: I0308 22:02:25.338122 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z4s4\" (UniqueName: \"kubernetes.io/projected/c377685c-2024-4ef7-932d-5858eeb0d9bd-kube-api-access-4z4s4\") pod \"openshift-state-metrics-74cc79fd76-8rbn8\" (UID: \"c377685c-2024-4ef7-932d-5858eeb0d9bd\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8" Mar 08 22:02:25.341867 master-0 kubenswrapper[7480]: I0308 22:02:25.341089 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fp4g\" (UniqueName: 
\"kubernetes.io/projected/0269ed52-a753-49aa-9c38-c7aee23cebbd-kube-api-access-8fp4g\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:02:25.391216 master-0 kubenswrapper[7480]: I0308 22:02:25.391149 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2nfk\" (UniqueName: \"kubernetes.io/projected/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-api-access-z2nfk\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:02:25.391476 master-0 kubenswrapper[7480]: I0308 22:02:25.391427 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c3af41e9-c604-48da-bec5-df81c2ef3374-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:02:25.391633 master-0 kubenswrapper[7480]: I0308 22:02:25.391608 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:02:25.391839 master-0 kubenswrapper[7480]: I0308 22:02:25.391802 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:02:25.391953 master-0 kubenswrapper[7480]: I0308 22:02:25.391931 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:02:25.392344 master-0 kubenswrapper[7480]: I0308 22:02:25.392008 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/c3af41e9-c604-48da-bec5-df81c2ef3374-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:02:25.392344 master-0 kubenswrapper[7480]: I0308 22:02:25.392283 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c3af41e9-c604-48da-bec5-df81c2ef3374-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:02:25.392705 master-0 kubenswrapper[7480]: I0308 22:02:25.392643 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:02:25.392705 master-0 kubenswrapper[7480]: I0308 22:02:25.392669 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/c3af41e9-c604-48da-bec5-df81c2ef3374-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:02:25.395504 master-0 kubenswrapper[7480]: I0308 22:02:25.395272 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:02:25.395504 master-0 kubenswrapper[7480]: I0308 22:02:25.395465 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:02:25.416006 master-0 kubenswrapper[7480]: I0308 22:02:25.413265 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2nfk\" (UniqueName: \"kubernetes.io/projected/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-api-access-z2nfk\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:02:25.431411 master-0 kubenswrapper[7480]: I0308 22:02:25.423118 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:02:25.435318 master-0 kubenswrapper[7480]: I0308 22:02:25.435268 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:02:25.464797 master-0 kubenswrapper[7480]: W0308 22:02:25.463322 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0269ed52_a753_49aa_9c38_c7aee23cebbd.slice/crio-dcc02028369ad7e36bc57efbe75d5305967f85a4b9666ef43d90eeaacc2b3f3e WatchSource:0}: Error finding container dcc02028369ad7e36bc57efbe75d5305967f85a4b9666ef43d90eeaacc2b3f3e: Status 404 returned error can't find the container with id dcc02028369ad7e36bc57efbe75d5305967f85a4b9666ef43d90eeaacc2b3f3e Mar 08 22:02:25.500186 master-0 kubenswrapper[7480]: I0308 22:02:25.500142 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:02:25.502640 master-0 kubenswrapper[7480]: I0308 22:02:25.502607 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:25.502640 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:25.502640 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:25.502640 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:25.502760 master-0 kubenswrapper[7480]: I0308 22:02:25.502652 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:25.620808 master-0 kubenswrapper[7480]: I0308 22:02:25.620743 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8" Mar 08 22:02:25.763080 master-0 kubenswrapper[7480]: I0308 22:02:25.763019 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc"] Mar 08 22:02:25.767841 master-0 kubenswrapper[7480]: W0308 22:02:25.767766 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3af41e9_c604_48da_bec5_df81c2ef3374.slice/crio-128b0bbce1167507413481adcf0cd96d93f47d1c9ffde9e41a211956e1a927c9 WatchSource:0}: Error finding container 128b0bbce1167507413481adcf0cd96d93f47d1c9ffde9e41a211956e1a927c9: Status 404 returned error can't find the container with id 128b0bbce1167507413481adcf0cd96d93f47d1c9ffde9e41a211956e1a927c9 Mar 08 22:02:26.105117 master-0 kubenswrapper[7480]: I0308 22:02:26.103013 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8"] Mar 08 22:02:26.297781 master-0 kubenswrapper[7480]: I0308 22:02:26.297727 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-l8k5g" event={"ID":"0269ed52-a753-49aa-9c38-c7aee23cebbd","Type":"ContainerStarted","Data":"dcc02028369ad7e36bc57efbe75d5305967f85a4b9666ef43d90eeaacc2b3f3e"} Mar 08 22:02:26.299476 master-0 kubenswrapper[7480]: I0308 22:02:26.299441 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8" event={"ID":"c377685c-2024-4ef7-932d-5858eeb0d9bd","Type":"ContainerStarted","Data":"01e7e6db40b352d1bb5e058f335eb116c496e54948df30ad1e0dec47816a596f"} Mar 08 22:02:26.299580 master-0 kubenswrapper[7480]: I0308 22:02:26.299488 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8" event={"ID":"c377685c-2024-4ef7-932d-5858eeb0d9bd","Type":"ContainerStarted","Data":"dcce2795ffc43a6cd86e6b9ec76eb643d8b1c22dbdc50b3b5ab3767ff2108c08"} Mar 08 22:02:26.300693 master-0 kubenswrapper[7480]: I0308 22:02:26.300649 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" event={"ID":"c3af41e9-c604-48da-bec5-df81c2ef3374","Type":"ContainerStarted","Data":"128b0bbce1167507413481adcf0cd96d93f47d1c9ffde9e41a211956e1a927c9"} Mar 08 22:02:26.504627 master-0 kubenswrapper[7480]: I0308 22:02:26.504564 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:26.504627 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:26.504627 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:26.504627 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:26.504935 master-0 kubenswrapper[7480]: I0308 22:02:26.504664 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:27.312971 master-0 kubenswrapper[7480]: I0308 22:02:27.312885 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8" 
event={"ID":"c377685c-2024-4ef7-932d-5858eeb0d9bd","Type":"ContainerStarted","Data":"636096a563f9790ad280be64875e151f0e3aea218ca6c330e59deb5dc7006700"} Mar 08 22:02:27.504965 master-0 kubenswrapper[7480]: I0308 22:02:27.504881 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:27.504965 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:27.504965 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:27.504965 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:27.505415 master-0 kubenswrapper[7480]: I0308 22:02:27.505009 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:28.323274 master-0 kubenswrapper[7480]: I0308 22:02:28.323150 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8" event={"ID":"c377685c-2024-4ef7-932d-5858eeb0d9bd","Type":"ContainerStarted","Data":"f84e2a09ee0c2b94b3a029e14eeb278827a7b20e5cab6340015020baa528a8ed"} Mar 08 22:02:28.327047 master-0 kubenswrapper[7480]: I0308 22:02:28.326980 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" event={"ID":"c3af41e9-c604-48da-bec5-df81c2ef3374","Type":"ContainerStarted","Data":"ecaf1243154dde279f8eb70fb3208ec4c39a8e7c7a27d9a0976f08303916202f"} Mar 08 22:02:28.327162 master-0 kubenswrapper[7480]: I0308 22:02:28.327075 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" event={"ID":"c3af41e9-c604-48da-bec5-df81c2ef3374","Type":"ContainerStarted","Data":"9c46876fc3ed9e88b423e5e3303487fe77ad4ea83416a3a3950db6e6ac947b05"} Mar 08 22:02:28.327162 master-0 kubenswrapper[7480]: I0308 22:02:28.327135 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" event={"ID":"c3af41e9-c604-48da-bec5-df81c2ef3374","Type":"ContainerStarted","Data":"8ff1aa9be63274968b15bcf0a7c20df9e9315bcb35a3d281e9aba68b98723c76"} Mar 08 22:02:28.330829 master-0 kubenswrapper[7480]: I0308 22:02:28.330752 7480 generic.go:334] "Generic (PLEG): container finished" podID="0269ed52-a753-49aa-9c38-c7aee23cebbd" containerID="c9cab6e5817c1932a6f2978d3ea0dfca3946b25467cd7fa690d906acf2f08a77" exitCode=0 Mar 08 22:02:28.330929 master-0 kubenswrapper[7480]: I0308 22:02:28.330857 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-l8k5g" event={"ID":"0269ed52-a753-49aa-9c38-c7aee23cebbd","Type":"ContainerDied","Data":"c9cab6e5817c1932a6f2978d3ea0dfca3946b25467cd7fa690d906acf2f08a77"} Mar 08 22:02:28.348368 master-0 kubenswrapper[7480]: I0308 22:02:28.348275 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8" podStartSLOduration=2.635618162 podStartE2EDuration="4.348250866s" podCreationTimestamp="2026-03-08 22:02:24 +0000 UTC" firstStartedPulling="2026-03-08 22:02:26.43427922 +0000 UTC m=+296.887899862" lastFinishedPulling="2026-03-08 22:02:28.146911964 +0000 UTC m=+298.600532566" observedRunningTime="2026-03-08 
22:02:28.347555228 +0000 UTC m=+298.801175860" watchObservedRunningTime="2026-03-08 22:02:28.348250866 +0000 UTC m=+298.801871468" Mar 08 22:02:28.378688 master-0 kubenswrapper[7480]: I0308 22:02:28.378565 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" podStartSLOduration=1.929444615 podStartE2EDuration="3.378533949s" podCreationTimestamp="2026-03-08 22:02:25 +0000 UTC" firstStartedPulling="2026-03-08 22:02:25.773708661 +0000 UTC m=+296.227329263" lastFinishedPulling="2026-03-08 22:02:27.222797995 +0000 UTC m=+297.676418597" observedRunningTime="2026-03-08 22:02:28.375443988 +0000 UTC m=+298.829064620" watchObservedRunningTime="2026-03-08 22:02:28.378533949 +0000 UTC m=+298.832154551" Mar 08 22:02:28.502742 master-0 kubenswrapper[7480]: I0308 22:02:28.502703 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:28.502742 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:28.502742 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:28.502742 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:28.502921 master-0 kubenswrapper[7480]: I0308 22:02:28.502768 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:29.342793 master-0 kubenswrapper[7480]: I0308 22:02:29.342709 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-l8k5g" event={"ID":"0269ed52-a753-49aa-9c38-c7aee23cebbd","Type":"ContainerStarted","Data":"f1e726f349106fd18bed1f94f7bc60cc539fff615238bcc5c5225950b7dde44b"} Mar 08 22:02:29.342793 master-0 kubenswrapper[7480]: I0308 22:02:29.342806 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-l8k5g" event={"ID":"0269ed52-a753-49aa-9c38-c7aee23cebbd","Type":"ContainerStarted","Data":"1852718559e4b6931ea40cd553a1b60dcc84f807d1f0a24bae4dc5ddc83f7474"} Mar 08 22:02:29.379990 master-0 kubenswrapper[7480]: I0308 22:02:29.379804 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-l8k5g" podStartSLOduration=2.640044515 podStartE2EDuration="4.37977237s" podCreationTimestamp="2026-03-08 22:02:25 +0000 UTC" firstStartedPulling="2026-03-08 22:02:25.473960455 +0000 UTC m=+295.927581047" lastFinishedPulling="2026-03-08 22:02:27.21368828 +0000 UTC m=+297.667308902" observedRunningTime="2026-03-08 22:02:29.369782671 +0000 UTC m=+299.823403313" watchObservedRunningTime="2026-03-08 22:02:29.37977237 +0000 UTC m=+299.833393012" Mar 08 22:02:29.503766 master-0 kubenswrapper[7480]: I0308 22:02:29.503668 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:29.503766 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:29.503766 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:29.503766 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:29.504257 
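The pod_startup_latency_tracker entries above carry enough data to reconstruct the SLO arithmetic: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window, consistent with the pod startup SLI definition, which does not count image pulling. A quick check against the openshift-state-metrics entry, using the monotonic m=+... offsets logged there:

```go
package main

import "fmt"

func main() {
	// m=+... monotonic offsets copied from the openshift-state-metrics entry above.
	firstStartedPulling := 296.887899862
	lastFinishedPulling := 298.600532566
	// e2e = observedRunningTime - podCreationTimestamp; the tracker logs it directly.
	e2e := 4.348250866
	pull := lastFinishedPulling - firstStartedPulling // ~1.712632704s spent pulling images
	slo := e2e - pull
	fmt.Printf("podStartSLOduration ≈ %.9fs\n", slo) // 2.635618162s, matching the log
}
```

The result matches the logged podStartSLOduration=2.635618162 exactly, confirming that the roughly 1.71s pull window is all that separates the two figures.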
Mar 08 22:02:29.504257 master-0 kubenswrapper[7480]: I0308 22:02:29.503795 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:02:30.503434 master-0 kubenswrapper[7480]: I0308 22:02:30.503212 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:02:30.503434 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:02:30.503434 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:02:30.503434 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:02:30.503434 master-0 kubenswrapper[7480]: I0308 22:02:30.503357 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:02:30.532479 master-0 kubenswrapper[7480]: I0308 22:02:30.532403 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-f5876b8d7-2222x"]
Mar 08 22:02:30.533371 master-0 kubenswrapper[7480]: I0308 22:02:30.533344 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-f5876b8d7-2222x"
Mar 08 22:02:30.535901 master-0 kubenswrapper[7480]: I0308 22:02:30.535830 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-xhdwj"
Mar 08 22:02:30.536533 master-0 kubenswrapper[7480]: I0308 22:02:30.536467 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Mar 08 22:02:30.537905 master-0 kubenswrapper[7480]: I0308 22:02:30.537864 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dv1om8r64ct8c"
Mar 08 22:02:30.538251 master-0 kubenswrapper[7480]: I0308 22:02:30.538213 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Mar 08 22:02:30.539460 master-0 kubenswrapper[7480]: I0308 22:02:30.539432 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Mar 08 22:02:30.547608 master-0 kubenswrapper[7480]: I0308 22:02:30.547557 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Mar 08 22:02:30.574801 master-0 kubenswrapper[7480]: I0308 22:02:30.574717 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-f5876b8d7-2222x"]
Mar 08 22:02:30.602951 master-0 kubenswrapper[7480]: I0308 22:02:30.602877 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x"
Mar 08 22:02:30.603152 master-0 kubenswrapper[7480]: I0308 22:02:30.602980 7480 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l82d\" (UniqueName: \"kubernetes.io/projected/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-kube-api-access-9l82d\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:02:30.603214 master-0 kubenswrapper[7480]: I0308 22:02:30.603119 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-secret-metrics-server-tls\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:02:30.603623 master-0 kubenswrapper[7480]: I0308 22:02:30.603555 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-client-ca-bundle\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:02:30.603709 master-0 kubenswrapper[7480]: I0308 22:02:30.603671 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-metrics-server-audit-profiles\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:02:30.603770 master-0 kubenswrapper[7480]: I0308 22:02:30.603745 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-secret-metrics-client-certs\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:02:30.603896 master-0 kubenswrapper[7480]: I0308 22:02:30.603862 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-audit-log\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:02:30.705986 master-0 kubenswrapper[7480]: I0308 22:02:30.705903 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-metrics-server-audit-profiles\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:02:30.706285 master-0 kubenswrapper[7480]: I0308 22:02:30.706028 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-secret-metrics-client-certs\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:02:30.706285 master-0 kubenswrapper[7480]: I0308 22:02:30.706101 7480 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-audit-log\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:02:30.706285 master-0 kubenswrapper[7480]: I0308 22:02:30.706144 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:02:30.706285 master-0 kubenswrapper[7480]: I0308 22:02:30.706179 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l82d\" (UniqueName: \"kubernetes.io/projected/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-kube-api-access-9l82d\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:02:30.706409 master-0 kubenswrapper[7480]: I0308 22:02:30.706352 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-secret-metrics-server-tls\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:02:30.706630 master-0 kubenswrapper[7480]: I0308 22:02:30.706601 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-client-ca-bundle\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:02:30.708254 master-0 kubenswrapper[7480]: I0308 22:02:30.708204 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-audit-log\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:02:30.709228 master-0 kubenswrapper[7480]: I0308 22:02:30.709171 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:02:30.709522 master-0 kubenswrapper[7480]: I0308 22:02:30.709487 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-metrics-server-audit-profiles\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:02:30.710797 master-0 kubenswrapper[7480]: I0308 22:02:30.710756 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: 
\"kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-secret-metrics-client-certs\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:02:30.711044 master-0 kubenswrapper[7480]: I0308 22:02:30.711008 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-secret-metrics-server-tls\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:02:30.711547 master-0 kubenswrapper[7480]: I0308 22:02:30.711178 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-client-ca-bundle\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:02:30.724806 master-0 kubenswrapper[7480]: I0308 22:02:30.724329 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l82d\" (UniqueName: \"kubernetes.io/projected/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-kube-api-access-9l82d\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:02:30.906235 master-0 kubenswrapper[7480]: I0308 22:02:30.906164 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:02:30.983279 master-0 kubenswrapper[7480]: I0308 22:02:30.983220 7480 scope.go:117] "RemoveContainer" containerID="6fd82c9a243ac415559b6058cdd8b371086e0c724a6c0dd643229ce1967ee982" Mar 08 22:02:31.383446 master-0 kubenswrapper[7480]: I0308 22:02:31.381711 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-f5876b8d7-2222x"] Mar 08 22:02:31.387227 master-0 kubenswrapper[7480]: W0308 22:02:31.387166 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd589bfbb_3a7d_4617_9770_5c9ef737cb4a.slice/crio-da21a3ee43c3a1cb17c48c1a6eb142ca7aa097c1d4b093b742853ab9c1146ede WatchSource:0}: Error finding container da21a3ee43c3a1cb17c48c1a6eb142ca7aa097c1d4b093b742853ab9c1146ede: Status 404 returned error can't find the container with id da21a3ee43c3a1cb17c48c1a6eb142ca7aa097c1d4b093b742853ab9c1146ede Mar 08 22:02:31.390851 master-0 kubenswrapper[7480]: I0308 22:02:31.390799 7480 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 08 22:02:31.504261 master-0 kubenswrapper[7480]: I0308 22:02:31.504142 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:31.504261 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:31.504261 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:31.504261 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:31.506710 master-0 kubenswrapper[7480]: I0308 22:02:31.504306 7480 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:32.369143 master-0 kubenswrapper[7480]: I0308 22:02:32.369048 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" event={"ID":"d589bfbb-3a7d-4617-9770-5c9ef737cb4a","Type":"ContainerStarted","Data":"da21a3ee43c3a1cb17c48c1a6eb142ca7aa097c1d4b093b742853ab9c1146ede"} Mar 08 22:02:32.504329 master-0 kubenswrapper[7480]: I0308 22:02:32.504263 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:32.504329 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:32.504329 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:32.504329 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:32.505338 master-0 kubenswrapper[7480]: I0308 22:02:32.504340 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:33.378296 master-0 kubenswrapper[7480]: I0308 22:02:33.378233 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" event={"ID":"d589bfbb-3a7d-4617-9770-5c9ef737cb4a","Type":"ContainerStarted","Data":"43a9d4a149475717fa1ef3d37fbaab396886033829072b529898dcdefcf58e78"} Mar 08 22:02:33.413517 master-0 kubenswrapper[7480]: I0308 22:02:33.413425 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" podStartSLOduration=1.600691258 podStartE2EDuration="3.413397267s" podCreationTimestamp="2026-03-08 22:02:30 +0000 UTC" firstStartedPulling="2026-03-08 22:02:31.390672971 +0000 UTC m=+301.844293573" lastFinishedPulling="2026-03-08 22:02:33.20337898 +0000 UTC m=+303.656999582" observedRunningTime="2026-03-08 22:02:33.406388306 +0000 UTC m=+303.860008938" watchObservedRunningTime="2026-03-08 22:02:33.413397267 +0000 UTC m=+303.867017889" Mar 08 22:02:33.502807 master-0 kubenswrapper[7480]: I0308 22:02:33.502644 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:33.502807 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:33.502807 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:33.502807 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:33.503369 master-0 kubenswrapper[7480]: I0308 22:02:33.503317 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:34.503204 master-0 kubenswrapper[7480]: I0308 22:02:34.502821 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:34.503204 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:34.503204 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:34.503204 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:34.503204 master-0 kubenswrapper[7480]: I0308 22:02:34.502917 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:35.504093 master-0 kubenswrapper[7480]: I0308 22:02:35.503953 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:35.504093 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:35.504093 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:35.504093 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:35.505292 master-0 kubenswrapper[7480]: I0308 22:02:35.504095 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:36.504372 master-0 kubenswrapper[7480]: I0308 22:02:36.504302 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:36.504372 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:36.504372 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:36.504372 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:36.505586 master-0 kubenswrapper[7480]: I0308 22:02:36.505540 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:37.503934 master-0 kubenswrapper[7480]: I0308 22:02:37.503823 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:37.503934 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:37.503934 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:37.503934 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:37.504427 master-0 kubenswrapper[7480]: I0308 22:02:37.503960 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:38.502772 master-0 kubenswrapper[7480]: I0308 22:02:38.502689 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:38.502772 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:38.502772 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:38.502772 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:38.503619 master-0 kubenswrapper[7480]: I0308 22:02:38.502804 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:39.503496 master-0 kubenswrapper[7480]: I0308 22:02:39.503424 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:39.503496 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:39.503496 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:39.503496 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:39.504660 master-0 kubenswrapper[7480]: I0308 22:02:39.504598 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:40.503885 master-0 kubenswrapper[7480]: I0308 22:02:40.503686 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:40.503885 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:40.503885 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:40.503885 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:40.503885 master-0 kubenswrapper[7480]: I0308 22:02:40.503794 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:41.502835 master-0 kubenswrapper[7480]: I0308 22:02:41.502746 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:41.502835 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:41.502835 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:41.502835 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:41.502835 master-0 kubenswrapper[7480]: I0308 22:02:41.502815 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:42.503771 master-0 kubenswrapper[7480]: I0308 22:02:42.503668 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:42.503771 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:42.503771 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:42.503771 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:42.503771 master-0 kubenswrapper[7480]: I0308 22:02:42.503755 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:43.503556 master-0 kubenswrapper[7480]: I0308 22:02:43.503466 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:43.503556 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:43.503556 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:43.503556 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:43.504671 master-0 kubenswrapper[7480]: I0308 22:02:43.503564 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:44.461599 master-0 kubenswrapper[7480]: I0308 22:02:44.461447 7480 generic.go:334] "Generic (PLEG): container finished" podID="66e50eed-e3ac-431f-931b-7c4c848c491b" containerID="bd2fcdaa2b69646a1f5d77c5acf0088cc640d06a976607ae2c22145452d4676a" exitCode=0 Mar 08 22:02:44.461599 master-0 kubenswrapper[7480]: I0308 22:02:44.461531 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" event={"ID":"66e50eed-e3ac-431f-931b-7c4c848c491b","Type":"ContainerDied","Data":"bd2fcdaa2b69646a1f5d77c5acf0088cc640d06a976607ae2c22145452d4676a"} Mar 08 22:02:44.462499 master-0 kubenswrapper[7480]: I0308 22:02:44.462454 7480 scope.go:117] "RemoveContainer" containerID="bd2fcdaa2b69646a1f5d77c5acf0088cc640d06a976607ae2c22145452d4676a" Mar 08 22:02:44.504918 master-0 kubenswrapper[7480]: I0308 22:02:44.504866 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:44.504918 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:44.504918 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:44.504918 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:44.505561 master-0 kubenswrapper[7480]: I0308 22:02:44.504944 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:45.469847 master-0 kubenswrapper[7480]: I0308 22:02:45.469774 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" 
event={"ID":"66e50eed-e3ac-431f-931b-7c4c848c491b","Type":"ContainerStarted","Data":"dbfa49a582d726e5ea9983357688b4a39d457da61c0391b2dbe1b2423bd4f6ec"} Mar 08 22:02:45.502652 master-0 kubenswrapper[7480]: I0308 22:02:45.502576 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:45.502652 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:45.502652 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:45.502652 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:45.502652 master-0 kubenswrapper[7480]: I0308 22:02:45.502625 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:46.503574 master-0 kubenswrapper[7480]: I0308 22:02:46.503448 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:46.503574 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:46.503574 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:46.503574 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:46.504734 master-0 kubenswrapper[7480]: I0308 22:02:46.503619 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:47.503909 master-0 kubenswrapper[7480]: I0308 22:02:47.503816 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:47.503909 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:47.503909 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:47.503909 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:47.504992 master-0 kubenswrapper[7480]: I0308 22:02:47.503934 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:48.504708 master-0 kubenswrapper[7480]: I0308 22:02:48.504587 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:48.504708 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:48.504708 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:48.504708 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:48.505805 master-0 kubenswrapper[7480]: I0308 22:02:48.504713 7480 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:49.503842 master-0 kubenswrapper[7480]: I0308 22:02:49.503759 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:49.503842 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:49.503842 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:49.503842 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:49.504397 master-0 kubenswrapper[7480]: I0308 22:02:49.503956 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:50.502160 master-0 kubenswrapper[7480]: I0308 22:02:50.502029 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:50.502160 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:50.502160 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:50.502160 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:50.502160 master-0 kubenswrapper[7480]: I0308 22:02:50.502135 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:50.906865 master-0 kubenswrapper[7480]: I0308 22:02:50.906751 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:02:50.906865 master-0 kubenswrapper[7480]: I0308 22:02:50.906863 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:02:51.504418 master-0 kubenswrapper[7480]: I0308 22:02:51.504312 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:51.504418 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:51.504418 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:51.504418 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:51.505461 master-0 kubenswrapper[7480]: I0308 22:02:51.504426 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:52.503029 master-0 kubenswrapper[7480]: I0308 22:02:52.502944 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
Mar 08 22:02:52.503029 master-0 kubenswrapper[7480]: I0308 22:02:52.502944 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:02:52.503029 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:02:52.503029 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:02:52.503029 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:02:52.503029 master-0 kubenswrapper[7480]: I0308 22:02:52.503035 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:02:52.595924 master-0 kubenswrapper[7480]: I0308 22:02:52.595817 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-8h8fx_3f1a7900-a0b2-47fc-b43c-a0a5dee6b657/authentication-operator/1.log"
Mar 08 22:02:52.791119 master-0 kubenswrapper[7480]: I0308 22:02:52.790903 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-8h8fx_3f1a7900-a0b2-47fc-b43c-a0a5dee6b657/authentication-operator/2.log"
Mar 08 22:02:52.992172 master-0 kubenswrapper[7480]: I0308 22:02:52.992048 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-79f8cd6fdd-4fsdl_81f5ed55-225c-41e2-bc9d-b41063a604c9/router/0.log"
Mar 08 22:02:53.184650 master-0 kubenswrapper[7480]: I0308 22:02:53.184602 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-6bf768964c-srxfg_a5afb146-31d7-4da9-8738-b6c15528233a/fix-audit-permissions/0.log"
Mar 08 22:02:53.392886 master-0 kubenswrapper[7480]: I0308 22:02:53.392822 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-6bf768964c-srxfg_a5afb146-31d7-4da9-8738-b6c15528233a/oauth-apiserver/0.log"
Mar 08 22:02:53.504259 master-0 kubenswrapper[7480]: I0308 22:02:53.504019 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:02:53.504259 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:02:53.504259 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:02:53.504259 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:02:53.504259 master-0 kubenswrapper[7480]: I0308 22:02:53.504215 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:02:53.594260 master-0 kubenswrapper[7480]: I0308 22:02:53.594120 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-bh88w_4382d186-34e4-40af-9b92-bb17ddcaa23f/etcd-operator/1.log"
Mar 08 22:02:53.788108 master-0 kubenswrapper[7480]: I0308 22:02:53.787884 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-bh88w_4382d186-34e4-40af-9b92-bb17ddcaa23f/etcd-operator/2.log"
Mar 08 22:02:53.984903 master-0 kubenswrapper[7480]: I0308 22:02:53.984838 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/setup/0.log"
Mar 08 22:02:54.188200 master-0 kubenswrapper[7480]: I0308 22:02:54.188128 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-ensure-env-vars/0.log"
Mar 08 22:02:54.386017 master-0 kubenswrapper[7480]: I0308 22:02:54.385930 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-resources-copy/0.log"
Mar 08 22:02:54.504009 master-0 kubenswrapper[7480]: I0308 22:02:54.503767 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:02:54.504009 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:02:54.504009 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:02:54.504009 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:02:54.504009 master-0 kubenswrapper[7480]: I0308 22:02:54.503883 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:02:54.586856 master-0 kubenswrapper[7480]: I0308 22:02:54.586766 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcdctl/0.log"
Mar 08 22:02:54.795921 master-0 kubenswrapper[7480]: I0308 22:02:54.795725 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd/0.log"
Mar 08 22:02:54.991336 master-0 kubenswrapper[7480]: I0308 22:02:54.991280 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log"
Mar 08 22:02:55.185878 master-0 kubenswrapper[7480]: I0308 22:02:55.185828 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-readyz/0.log"
Mar 08 22:02:55.387790 master-0 kubenswrapper[7480]: I0308 22:02:55.387643 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log"
Mar 08 22:02:55.505001 master-0 kubenswrapper[7480]: I0308 22:02:55.504731 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:02:55.505001 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:02:55.505001 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:02:55.505001 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:02:55.505624 master-0 kubenswrapper[7480]: I0308 22:02:55.504998 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:02:55.593857 master-0 kubenswrapper[7480]: I0308 22:02:55.593797 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_57a34dbc-eb6d-44f5-b1aa-4762b69382ed/installer/0.log"
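The files the kubelet is walking here, /var/log/pods/<namespace>_<pod>_<uid>/<container>/<n>.log, are CRI container logs: one entry per line, consisting of an RFC3339Nano timestamp, the stream name (stdout or stderr), a P/F partial-line tag, and the message. A small illustrative parser for that format (the kubelet ships its own in-tree parser; this is not it, and the path below is just one taken from the entries above):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// criEntry is one line of a /var/log/pods/.../N.log file in CRI format:
// "<RFC3339Nano ts> <stdout|stderr> <P|F> <message>".
type criEntry struct {
	When    time.Time
	Stream  string
	Partial bool // "P" means the runtime split a long line
	Message string
}

func parseCRILine(line string) (criEntry, error) {
	parts := strings.SplitN(line, " ", 4)
	if len(parts) != 4 {
		return criEntry{}, fmt.Errorf("malformed CRI log line: %q", line)
	}
	ts, err := time.Parse(time.RFC3339Nano, parts[0])
	if err != nil {
		return criEntry{}, err
	}
	return criEntry{When: ts, Stream: parts[1], Partial: parts[2] == "P", Message: parts[3]}, nil
}

func main() {
	f, err := os.Open("/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd/0.log")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if e, err := parseCRILine(sc.Text()); err == nil {
			fmt.Printf("%s %s %s\n", e.When.Format(time.RFC3339), e.Stream, e.Message)
		}
	}
}
```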
Mar 08 22:02:55.791452 master-0 kubenswrapper[7480]: I0308 22:02:55.791271 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-mww2c_04fb7bdb-fb5a-4187-94a3-67c8f09684ed/kube-apiserver-operator/0.log"
Mar 08 22:02:55.988444 master-0 kubenswrapper[7480]: I0308 22:02:55.988368 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-mww2c_04fb7bdb-fb5a-4187-94a3-67c8f09684ed/kube-apiserver-operator/1.log"
Mar 08 22:02:56.184144 master-0 kubenswrapper[7480]: I0308 22:02:56.184041 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5f77c8e18b751d90bc0dfe2d4e304050/setup/0.log"
Mar 08 22:02:56.394861 master-0 kubenswrapper[7480]: I0308 22:02:56.394794 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5f77c8e18b751d90bc0dfe2d4e304050/kube-apiserver/0.log"
Mar 08 22:02:56.504455 master-0 kubenswrapper[7480]: I0308 22:02:56.504261 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:02:56.504455 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:02:56.504455 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:02:56.504455 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:02:56.504455 master-0 kubenswrapper[7480]: I0308 22:02:56.504378 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:02:56.587967 master-0 kubenswrapper[7480]: I0308 22:02:56.587882 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5f77c8e18b751d90bc0dfe2d4e304050/kube-apiserver-insecure-readyz/0.log"
Mar 08 22:02:56.789206 master-0 kubenswrapper[7480]: I0308 22:02:56.789035 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0/installer/0.log"
Mar 08 22:02:56.991494 master-0 kubenswrapper[7480]: I0308 22:02:56.991421 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_78dc543f-66ed-4098-b5a9-699ec2ccc856/installer/0.log"
Mar 08 22:02:57.193120 master-0 kubenswrapper[7480]: I0308 22:02:57.192954 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-8pqc2_b849f992-1020-4633-98be-75705b962fa9/kube-controller-manager-operator/1.log"
Mar 08 22:02:57.387773 master-0 kubenswrapper[7480]: I0308 22:02:57.387705 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-8pqc2_b849f992-1020-4633-98be-75705b962fa9/kube-controller-manager-operator/2.log"
Mar 08 22:02:57.504050 master-0 kubenswrapper[7480]: I0308 22:02:57.503880 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP
probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:57.504050 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:57.504050 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:57.504050 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:57.504050 master-0 kubenswrapper[7480]: I0308 22:02:57.503969 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:57.595795 master-0 kubenswrapper[7480]: I0308 22:02:57.595709 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_f78c05e1499b533b83f091333d61f045/kube-controller-manager/2.log" Mar 08 22:02:57.995010 master-0 kubenswrapper[7480]: I0308 22:02:57.994938 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_f78c05e1499b533b83f091333d61f045/kube-controller-manager/3.log" Mar 08 22:02:58.195114 master-0 kubenswrapper[7480]: I0308 22:02:58.194960 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_f78c05e1499b533b83f091333d61f045/cluster-policy-controller/0.log" Mar 08 22:02:58.391457 master-0 kubenswrapper[7480]: I0308 22:02:58.391409 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_a1a56802af72ce1aac6b5077f1695ac0/kube-scheduler/0.log" Mar 08 22:02:58.503544 master-0 kubenswrapper[7480]: I0308 22:02:58.503461 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:58.503544 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:58.503544 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:58.503544 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:58.503925 master-0 kubenswrapper[7480]: I0308 22:02:58.503581 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:58.596495 master-0 kubenswrapper[7480]: I0308 22:02:58.596446 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_a1a56802af72ce1aac6b5077f1695ac0/kube-scheduler/1.log" Mar 08 22:02:58.792733 master-0 kubenswrapper[7480]: I0308 22:02:58.792542 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_c633355a-b323-4458-8ecb-1e490d115f59/installer/0.log" Mar 08 22:02:58.994961 master-0 kubenswrapper[7480]: I0308 22:02:58.994870 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-2mspg_f6fbc12f-3c27-4a7a-933f-43a55c960335/kube-scheduler-operator-container/1.log" Mar 08 22:02:59.188781 master-0 kubenswrapper[7480]: I0308 22:02:59.188683 7480 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-2mspg_f6fbc12f-3c27-4a7a-933f-43a55c960335/kube-scheduler-operator-container/2.log" Mar 08 22:02:59.393474 master-0 kubenswrapper[7480]: I0308 22:02:59.393362 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-nqz5k_a8e00c74-fb72-4e3d-a22c-c38a4772a813/openshift-apiserver-operator/1.log" Mar 08 22:02:59.503578 master-0 kubenswrapper[7480]: I0308 22:02:59.503365 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:02:59.503578 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:02:59.503578 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:02:59.503578 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:02:59.503578 master-0 kubenswrapper[7480]: I0308 22:02:59.503481 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:02:59.588609 master-0 kubenswrapper[7480]: I0308 22:02:59.588495 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-nqz5k_a8e00c74-fb72-4e3d-a22c-c38a4772a813/openshift-apiserver-operator/2.log" Mar 08 22:02:59.788426 master-0 kubenswrapper[7480]: I0308 22:02:59.788253 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-6f9445b8fd-w44n6_ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a/fix-audit-permissions/0.log" Mar 08 22:02:59.992310 master-0 kubenswrapper[7480]: I0308 22:02:59.992197 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-6f9445b8fd-w44n6_ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a/openshift-apiserver/0.log" Mar 08 22:03:00.211505 master-0 kubenswrapper[7480]: I0308 22:03:00.211264 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-6f9445b8fd-w44n6_ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a/openshift-apiserver-check-endpoints/0.log" Mar 08 22:03:00.390026 master-0 kubenswrapper[7480]: I0308 22:03:00.389860 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-bh88w_4382d186-34e4-40af-9b92-bb17ddcaa23f/etcd-operator/1.log" Mar 08 22:03:00.503939 master-0 kubenswrapper[7480]: I0308 22:03:00.503860 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:00.503939 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:00.503939 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:00.503939 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:00.504453 master-0 kubenswrapper[7480]: I0308 22:03:00.503951 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 08 22:03:00.587670 master-0 kubenswrapper[7480]: I0308 22:03:00.587252 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-bh88w_4382d186-34e4-40af-9b92-bb17ddcaa23f/etcd-operator/2.log" Mar 08 22:03:00.792146 master-0 kubenswrapper[7480]: I0308 22:03:00.791938 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-7d9c49f57b-6q5t2_83b5f0b6-adee-4820-8212-b4d182b178d2/catalog-operator/0.log" Mar 08 22:03:00.994184 master-0 kubenswrapper[7480]: I0308 22:03:00.994099 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-d64cfc9db-xqh7x_3c50dd1f-fcbc-412c-a1cc-0738ea4464e0/olm-operator/0.log" Mar 08 22:03:01.187236 master-0 kubenswrapper[7480]: I0308 22:03:01.187154 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-x5zxr_be431b74-1116-4b0f-8b25-bbb0408411b0/kube-rbac-proxy/0.log" Mar 08 22:03:01.396959 master-0 kubenswrapper[7480]: I0308 22:03:01.396874 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-x5zxr_be431b74-1116-4b0f-8b25-bbb0408411b0/package-server-manager/0.log" Mar 08 22:03:01.503640 master-0 kubenswrapper[7480]: I0308 22:03:01.503467 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:01.503640 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:01.503640 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:01.503640 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:01.503640 master-0 kubenswrapper[7480]: I0308 22:03:01.503585 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:01.593566 master-0 kubenswrapper[7480]: I0308 22:03:01.593516 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-f988cd549-68kmh_4e2eb05c-eaa5-4d9b-abad-c0ef6835087e/packageserver/0.log" Mar 08 22:03:02.503836 master-0 kubenswrapper[7480]: I0308 22:03:02.503756 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:02.503836 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:02.503836 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:02.503836 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:02.504683 master-0 kubenswrapper[7480]: I0308 22:03:02.503854 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:03.503123 master-0 kubenswrapper[7480]: I0308 22:03:03.502996 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:03.503123 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:03.503123 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:03.503123 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:03.503453 master-0 kubenswrapper[7480]: I0308 22:03:03.503154 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:04.503768 master-0 kubenswrapper[7480]: I0308 22:03:04.503663 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:04.503768 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:04.503768 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:04.503768 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:04.504811 master-0 kubenswrapper[7480]: I0308 22:03:04.503766 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:05.503719 master-0 kubenswrapper[7480]: I0308 22:03:05.503605 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:05.503719 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:05.503719 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:05.503719 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:05.505193 master-0 kubenswrapper[7480]: I0308 22:03:05.503752 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:06.503736 master-0 kubenswrapper[7480]: I0308 22:03:06.503609 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:06.503736 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:06.503736 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:06.503736 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:06.504208 master-0 kubenswrapper[7480]: I0308 22:03:06.503778 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:07.504608 master-0 kubenswrapper[7480]: I0308 22:03:07.504498 7480 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:07.504608 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:07.504608 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:07.504608 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:07.505343 master-0 kubenswrapper[7480]: I0308 22:03:07.504649 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:08.503944 master-0 kubenswrapper[7480]: I0308 22:03:08.503836 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:08.503944 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:08.503944 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:08.503944 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:08.504487 master-0 kubenswrapper[7480]: I0308 22:03:08.503955 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:09.503573 master-0 kubenswrapper[7480]: I0308 22:03:09.503463 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:09.503573 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:09.503573 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:09.503573 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:09.504708 master-0 kubenswrapper[7480]: I0308 22:03:09.503627 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:10.503653 master-0 kubenswrapper[7480]: I0308 22:03:10.503500 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:10.503653 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:10.503653 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:10.503653 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:10.503653 master-0 kubenswrapper[7480]: I0308 22:03:10.503586 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:10.919857 master-0 kubenswrapper[7480]: I0308 22:03:10.919745 7480 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:03:10.925715 master-0 kubenswrapper[7480]: I0308 22:03:10.925610 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:03:11.503349 master-0 kubenswrapper[7480]: I0308 22:03:11.503230 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:11.503349 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:11.503349 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:11.503349 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:11.504183 master-0 kubenswrapper[7480]: I0308 22:03:11.503437 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:12.503742 master-0 kubenswrapper[7480]: I0308 22:03:12.503650 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:12.503742 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:12.503742 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:12.503742 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:12.503742 master-0 kubenswrapper[7480]: I0308 22:03:12.503716 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:13.503982 master-0 kubenswrapper[7480]: I0308 22:03:13.503905 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:13.503982 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:13.503982 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:13.503982 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:13.505252 master-0 kubenswrapper[7480]: I0308 22:03:13.504867 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:14.504985 master-0 kubenswrapper[7480]: I0308 22:03:14.504869 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:14.504985 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:14.504985 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:14.504985 master-0 kubenswrapper[7480]: healthz check failed Mar 08 
22:03:14.505670 master-0 kubenswrapper[7480]: I0308 22:03:14.505009 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:15.504228 master-0 kubenswrapper[7480]: I0308 22:03:15.504142 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:15.504228 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:15.504228 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:15.504228 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:15.504867 master-0 kubenswrapper[7480]: I0308 22:03:15.504252 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:16.504427 master-0 kubenswrapper[7480]: I0308 22:03:16.504346 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:16.504427 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:16.504427 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:16.504427 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:16.505454 master-0 kubenswrapper[7480]: I0308 22:03:16.504440 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:17.504530 master-0 kubenswrapper[7480]: I0308 22:03:17.504442 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:17.504530 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:17.504530 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:17.504530 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:17.505545 master-0 kubenswrapper[7480]: I0308 22:03:17.504549 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:18.503946 master-0 kubenswrapper[7480]: I0308 22:03:18.503842 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:18.503946 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:18.503946 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:18.503946 master-0 kubenswrapper[7480]: healthz check failed 
Mar 08 22:03:18.504478 master-0 kubenswrapper[7480]: I0308 22:03:18.503969 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:19.503549 master-0 kubenswrapper[7480]: I0308 22:03:19.503407 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:19.503549 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:19.503549 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:19.503549 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:19.504399 master-0 kubenswrapper[7480]: I0308 22:03:19.503612 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:20.503811 master-0 kubenswrapper[7480]: I0308 22:03:20.503720 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:20.503811 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:20.503811 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:20.503811 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:20.504455 master-0 kubenswrapper[7480]: I0308 22:03:20.503841 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:21.502295 master-0 kubenswrapper[7480]: I0308 22:03:21.502169 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:21.502295 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:21.502295 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:21.502295 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:21.502631 master-0 kubenswrapper[7480]: I0308 22:03:21.502317 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:22.503580 master-0 kubenswrapper[7480]: I0308 22:03:22.503504 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:22.503580 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:22.503580 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:22.503580 master-0 kubenswrapper[7480]: healthz check 
failed Mar 08 22:03:22.504574 master-0 kubenswrapper[7480]: I0308 22:03:22.503590 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:23.503436 master-0 kubenswrapper[7480]: I0308 22:03:23.503336 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:23.503436 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:23.503436 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:23.503436 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:23.504318 master-0 kubenswrapper[7480]: I0308 22:03:23.503464 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:24.504100 master-0 kubenswrapper[7480]: I0308 22:03:24.503958 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:24.504100 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:24.504100 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:24.504100 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:24.505215 master-0 kubenswrapper[7480]: I0308 22:03:24.504122 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:25.504580 master-0 kubenswrapper[7480]: I0308 22:03:25.504487 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:25.504580 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:25.504580 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:25.504580 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:25.505851 master-0 kubenswrapper[7480]: I0308 22:03:25.504610 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:26.503000 master-0 kubenswrapper[7480]: I0308 22:03:26.502809 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:26.503000 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:26.503000 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:26.503000 master-0 kubenswrapper[7480]: healthz 
check failed Mar 08 22:03:26.503000 master-0 kubenswrapper[7480]: I0308 22:03:26.502903 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:27.503923 master-0 kubenswrapper[7480]: I0308 22:03:27.503821 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:27.503923 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:27.503923 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:27.503923 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:27.504727 master-0 kubenswrapper[7480]: I0308 22:03:27.503941 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:28.504543 master-0 kubenswrapper[7480]: I0308 22:03:28.504456 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:28.504543 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:28.504543 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:28.504543 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:28.505375 master-0 kubenswrapper[7480]: I0308 22:03:28.504587 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:29.504160 master-0 kubenswrapper[7480]: I0308 22:03:29.504000 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:29.504160 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:29.504160 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:29.504160 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:29.505250 master-0 kubenswrapper[7480]: I0308 22:03:29.504211 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:30.504965 master-0 kubenswrapper[7480]: I0308 22:03:30.504783 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:30.504965 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:30.504965 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:30.504965 master-0 kubenswrapper[7480]: 
healthz check failed Mar 08 22:03:30.504965 master-0 kubenswrapper[7480]: I0308 22:03:30.504895 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:31.504173 master-0 kubenswrapper[7480]: I0308 22:03:31.504021 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:31.504173 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:31.504173 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:31.504173 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:31.504632 master-0 kubenswrapper[7480]: I0308 22:03:31.504206 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:32.503624 master-0 kubenswrapper[7480]: I0308 22:03:32.503524 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:32.503624 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:32.503624 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:32.503624 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:32.504316 master-0 kubenswrapper[7480]: I0308 22:03:32.503642 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:33.502904 master-0 kubenswrapper[7480]: I0308 22:03:33.502797 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:33.502904 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:33.502904 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:33.502904 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:33.503552 master-0 kubenswrapper[7480]: I0308 22:03:33.502979 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:34.503587 master-0 kubenswrapper[7480]: I0308 22:03:34.503485 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:34.503587 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:34.503587 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:34.503587 master-0 
kubenswrapper[7480]: healthz check failed Mar 08 22:03:34.504931 master-0 kubenswrapper[7480]: I0308 22:03:34.503708 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:35.504528 master-0 kubenswrapper[7480]: I0308 22:03:35.504426 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:35.504528 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:35.504528 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:35.504528 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:35.505550 master-0 kubenswrapper[7480]: I0308 22:03:35.504534 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:36.504027 master-0 kubenswrapper[7480]: I0308 22:03:36.503862 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:36.504027 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:36.504027 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:36.504027 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:36.504722 master-0 kubenswrapper[7480]: I0308 22:03:36.504670 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:37.505009 master-0 kubenswrapper[7480]: I0308 22:03:37.504857 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:37.505009 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:37.505009 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:37.505009 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:37.505009 master-0 kubenswrapper[7480]: I0308 22:03:37.504984 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:38.503564 master-0 kubenswrapper[7480]: I0308 22:03:38.503459 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:38.503564 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:38.503564 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:38.503564 
master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:38.504012 master-0 kubenswrapper[7480]: I0308 22:03:38.503585 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:39.504107 master-0 kubenswrapper[7480]: I0308 22:03:39.503983 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:39.504107 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:39.504107 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:39.504107 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:39.505184 master-0 kubenswrapper[7480]: I0308 22:03:39.504123 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:40.503616 master-0 kubenswrapper[7480]: I0308 22:03:40.503414 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:40.503616 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:40.503616 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:40.503616 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:40.503616 master-0 kubenswrapper[7480]: I0308 22:03:40.503583 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:40.964383 master-0 kubenswrapper[7480]: I0308 22:03:40.963778 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-cjdgr_84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/ingress-operator/1.log" Mar 08 22:03:40.967239 master-0 kubenswrapper[7480]: I0308 22:03:40.966056 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-cjdgr_84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/ingress-operator/0.log" Mar 08 22:03:40.967239 master-0 kubenswrapper[7480]: I0308 22:03:40.966323 7480 generic.go:334] "Generic (PLEG): container finished" podID="84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed" containerID="799bcb818f10708811e14b095b41eda5205477d4badc6517a720213a0c436a29" exitCode=1 Mar 08 22:03:40.967239 master-0 kubenswrapper[7480]: I0308 22:03:40.966408 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" event={"ID":"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed","Type":"ContainerDied","Data":"799bcb818f10708811e14b095b41eda5205477d4badc6517a720213a0c436a29"} Mar 08 22:03:40.967239 master-0 kubenswrapper[7480]: I0308 22:03:40.966502 7480 scope.go:117] "RemoveContainer" containerID="1a0df161078208a525b4d1fb6d4ca6198700570b496ec5545cc3b9587304d8a5" Mar 08 22:03:40.967784 master-0 kubenswrapper[7480]: I0308 
22:03:40.967681 7480 scope.go:117] "RemoveContainer" containerID="799bcb818f10708811e14b095b41eda5205477d4badc6517a720213a0c436a29" Mar 08 22:03:40.968662 master-0 kubenswrapper[7480]: E0308 22:03:40.968577 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-cjdgr_openshift-ingress-operator(84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" podUID="84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed" Mar 08 22:03:41.504030 master-0 kubenswrapper[7480]: I0308 22:03:41.503917 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:41.504030 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:41.504030 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:41.504030 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:41.505197 master-0 kubenswrapper[7480]: I0308 22:03:41.504113 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:41.979658 master-0 kubenswrapper[7480]: I0308 22:03:41.979590 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-cjdgr_84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/ingress-operator/1.log" Mar 08 22:03:42.504236 master-0 kubenswrapper[7480]: I0308 22:03:42.504157 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:42.504236 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:42.504236 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:42.504236 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:42.505397 master-0 kubenswrapper[7480]: I0308 22:03:42.504317 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:43.503716 master-0 kubenswrapper[7480]: I0308 22:03:43.503622 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:43.503716 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:43.503716 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:43.503716 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:43.504269 master-0 kubenswrapper[7480]: I0308 22:03:43.503716 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" 
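
The "Error syncing pod" event above is the kubelet's CrashLoopBackOff in action: after the ingress-operator container exits with code 1, the restart is held back by "back-off 10s", and the retry only lands at 22:03:54-55 below (RemoveContainer followed by ContainerStarted), i.e. the 10-second back-off plus sync-loop latency. Upstream kubelets double this delay on every subsequent crash up to a cap; the sketch below models that policy in Go. Note the hedge: the 10s base is visible in this log, but the doubling and the 5-minute cap are assumed from upstream kubelet defaults, not read from these lines.

    package main

    import (
        "fmt"
        "time"
    )

    // crashLoopDelay returns the kubelet-style back-off before restart n
    // (n = 0 is the first restart): a base delay, doubled once per
    // subsequent crash, capped at a maximum.
    func crashLoopDelay(n int, base, max time.Duration) time.Duration {
        d := base
        for i := 0; i < n; i++ {
            d *= 2
            if d > max {
                return max
            }
        }
        return d
    }

    func main() {
        // 10s base / 5m cap are assumed upstream kubelet defaults; this
        // log only shows the first "back-off 10s" step.
        for n := 0; n < 7; n++ {
            fmt.Printf("restart %d: wait %v\n", n, crashLoopDelay(n, 10*time.Second, 5*time.Minute))
        }
    }

A successful run after a back-off (as at 22:03:55 here) resets nothing by itself; the kubelet only clears the back-off once the container has stayed up long enough, so a quick re-crash would pick up the next, doubled delay.
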
Mar 08 22:03:44.503658 master-0 kubenswrapper[7480]: I0308 22:03:44.503580 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:44.503658 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:44.503658 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:44.503658 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:44.503658 master-0 kubenswrapper[7480]: I0308 22:03:44.503659 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:45.502953 master-0 kubenswrapper[7480]: I0308 22:03:45.502856 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:45.502953 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:45.502953 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:45.502953 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:45.502953 master-0 kubenswrapper[7480]: I0308 22:03:45.502936 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:46.504182 master-0 kubenswrapper[7480]: I0308 22:03:46.504018 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:46.504182 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:46.504182 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:46.504182 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:46.504182 master-0 kubenswrapper[7480]: I0308 22:03:46.504162 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:47.502848 master-0 kubenswrapper[7480]: I0308 22:03:47.502798 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:47.502848 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:47.502848 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:47.502848 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:47.503353 master-0 kubenswrapper[7480]: I0308 22:03:47.503321 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 08 22:03:48.503728 master-0 kubenswrapper[7480]: I0308 22:03:48.503662 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:48.503728 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:48.503728 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:48.503728 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:48.504803 master-0 kubenswrapper[7480]: I0308 22:03:48.504336 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:49.503719 master-0 kubenswrapper[7480]: I0308 22:03:49.503597 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:49.503719 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:49.503719 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:49.503719 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:49.503719 master-0 kubenswrapper[7480]: I0308 22:03:49.503706 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:50.504114 master-0 kubenswrapper[7480]: I0308 22:03:50.503859 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:50.504114 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:50.504114 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:50.504114 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:50.504114 master-0 kubenswrapper[7480]: I0308 22:03:50.503958 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:51.503710 master-0 kubenswrapper[7480]: I0308 22:03:51.503607 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:51.503710 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:51.503710 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:51.503710 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:51.505278 master-0 kubenswrapper[7480]: I0308 22:03:51.503726 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Mar 08 22:03:52.503864 master-0 kubenswrapper[7480]: I0308 22:03:52.503758 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:52.503864 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:52.503864 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:52.503864 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:52.503864 master-0 kubenswrapper[7480]: I0308 22:03:52.503869 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:53.503135 master-0 kubenswrapper[7480]: I0308 22:03:53.502980 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:53.503135 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:53.503135 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:53.503135 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:53.503666 master-0 kubenswrapper[7480]: I0308 22:03:53.503140 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:54.504219 master-0 kubenswrapper[7480]: I0308 22:03:54.504148 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:54.504219 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:54.504219 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:54.504219 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:54.505393 master-0 kubenswrapper[7480]: I0308 22:03:54.505334 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:54.782737 master-0 kubenswrapper[7480]: I0308 22:03:54.782499 7480 scope.go:117] "RemoveContainer" containerID="799bcb818f10708811e14b095b41eda5205477d4badc6517a720213a0c436a29" Mar 08 22:03:55.098858 master-0 kubenswrapper[7480]: I0308 22:03:55.098758 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-cjdgr_84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/ingress-operator/1.log" Mar 08 22:03:55.099734 master-0 kubenswrapper[7480]: I0308 22:03:55.099622 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" event={"ID":"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed","Type":"ContainerStarted","Data":"1f4a62d722d99fc6a3743dcd20f8ccf06ee8ac82957a3628d0186bea1711ac1c"} Mar 08 22:03:55.504335 master-0 kubenswrapper[7480]: I0308 
22:03:55.504149 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:55.504335 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:55.504335 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:55.504335 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:55.504335 master-0 kubenswrapper[7480]: I0308 22:03:55.504246 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:56.504656 master-0 kubenswrapper[7480]: I0308 22:03:56.504532 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:56.504656 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:56.504656 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:56.504656 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:56.505788 master-0 kubenswrapper[7480]: I0308 22:03:56.504706 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:57.503205 master-0 kubenswrapper[7480]: I0308 22:03:57.503119 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:57.503205 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:57.503205 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:57.503205 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:57.503931 master-0 kubenswrapper[7480]: I0308 22:03:57.503252 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:58.504106 master-0 kubenswrapper[7480]: I0308 22:03:58.503945 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:03:58.504106 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:03:58.504106 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:03:58.504106 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:03:58.505377 master-0 kubenswrapper[7480]: I0308 22:03:58.504130 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:03:59.504487 master-0 kubenswrapper[7480]: 
I0308 22:03:59.504379 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:03:59.504487 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:03:59.504487 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:03:59.504487 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:03:59.505703 master-0 kubenswrapper[7480]: I0308 22:03:59.504494 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
[... the same Startup probe failure block for pod/router-default-79f8cd6fdd-4fsdl repeats at one-second intervals from 22:04:00 through 22:04:18; per-second duplicates elided ...]
Mar 08 22:04:18.504910 master-0 kubenswrapper[7480]: I0308 22:04:18.504414 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl"
Mar 08 22:04:18.505624 master-0 kubenswrapper[7480]: I0308 22:04:18.505578 7480 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"043bea0bfcad80d082009c992d1913377d82e97e1ea5f2b55356dd0fdc8a2c8f"} pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" containerMessage="Container router failed startup probe, will be restarted"
Mar 08 22:04:18.505700 master-0 kubenswrapper[7480]: I0308 22:04:18.505667 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" containerID="cri-o://043bea0bfcad80d082009c992d1913377d82e97e1ea5f2b55356dd0fdc8a2c8f" gracePeriod=3600
Mar 08 22:04:36.464695 master-0 kubenswrapper[7480]: I0308 22:04:36.464613 7480 patch_prober.go:28] interesting pod/machine-config-daemon-q669r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 08 22:04:36.465496 master-0 kubenswrapper[7480]: I0308 22:04:36.464701 7480 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q669r" podUID="7868a4fb-af89-4bdc-b41b-31f4ee59b5f3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 08 22:05:04.651221 master-0 kubenswrapper[7480]: I0308 22:05:04.651166 7480 generic.go:334] "Generic (PLEG): container finished" podID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerID="043bea0bfcad80d082009c992d1913377d82e97e1ea5f2b55356dd0fdc8a2c8f" exitCode=0
Mar 08 22:05:04.651814 master-0 kubenswrapper[7480]: I0308 22:05:04.651297 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" event={"ID":"81f5ed55-225c-41e2-bc9d-b41063a604c9","Type":"ContainerDied","Data":"043bea0bfcad80d082009c992d1913377d82e97e1ea5f2b55356dd0fdc8a2c8f"}
Mar 08 22:05:05.669917 master-0 kubenswrapper[7480]: I0308 22:05:05.669834 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" event={"ID":"81f5ed55-225c-41e2-bc9d-b41063a604c9","Type":"ContainerStarted","Data":"8e67a6a8195a1bf0907601fa19ffa597a648c56ee5160c3ec3e81c5ecf98df23"}
Mar 08 22:05:06.501324 master-0 kubenswrapper[7480]: I0308 22:05:06.501199 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl"
7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:07.505538 master-0 kubenswrapper[7480]: I0308 22:05:07.505401 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:07.505538 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:07.505538 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:07.505538 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:07.506728 master-0 kubenswrapper[7480]: I0308 22:05:07.505544 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:08.505171 master-0 kubenswrapper[7480]: I0308 22:05:08.505040 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:08.505171 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:08.505171 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:08.505171 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:08.506049 master-0 kubenswrapper[7480]: I0308 22:05:08.505202 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:09.504305 master-0 kubenswrapper[7480]: I0308 22:05:09.504183 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:09.504305 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:09.504305 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:09.504305 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:09.504940 master-0 kubenswrapper[7480]: I0308 22:05:09.504325 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:10.504593 master-0 kubenswrapper[7480]: I0308 22:05:10.504429 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:10.504593 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:10.504593 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:10.504593 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:10.504593 master-0 kubenswrapper[7480]: I0308 
22:05:10.504536 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:11.503587 master-0 kubenswrapper[7480]: I0308 22:05:11.503521 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:11.503587 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:11.503587 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:11.503587 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:11.504369 master-0 kubenswrapper[7480]: I0308 22:05:11.504249 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:12.504794 master-0 kubenswrapper[7480]: I0308 22:05:12.504683 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:12.504794 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:12.504794 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:12.504794 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:12.505958 master-0 kubenswrapper[7480]: I0308 22:05:12.504897 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:13.504056 master-0 kubenswrapper[7480]: I0308 22:05:13.503959 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:13.504056 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:13.504056 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:13.504056 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:13.504578 master-0 kubenswrapper[7480]: I0308 22:05:13.504161 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:14.503680 master-0 kubenswrapper[7480]: I0308 22:05:14.503579 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:14.503680 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:14.503680 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:14.503680 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:14.504314 master-0 kubenswrapper[7480]: 
I0308 22:05:14.503729 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:15.501224 master-0 kubenswrapper[7480]: I0308 22:05:15.501043 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:05:15.504586 master-0 kubenswrapper[7480]: I0308 22:05:15.504527 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:15.504586 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:15.504586 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:15.504586 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:15.505694 master-0 kubenswrapper[7480]: I0308 22:05:15.504596 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:16.503442 master-0 kubenswrapper[7480]: I0308 22:05:16.503361 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:16.503442 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:16.503442 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:16.503442 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:16.503753 master-0 kubenswrapper[7480]: I0308 22:05:16.503455 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:17.508118 master-0 kubenswrapper[7480]: I0308 22:05:17.507961 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:17.508118 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:17.508118 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:17.508118 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:17.509557 master-0 kubenswrapper[7480]: I0308 22:05:17.508154 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:18.502987 master-0 kubenswrapper[7480]: I0308 22:05:18.502848 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:18.502987 master-0 kubenswrapper[7480]: [-]has-synced failed: reason 
withheld Mar 08 22:05:18.502987 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:18.502987 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:18.503504 master-0 kubenswrapper[7480]: I0308 22:05:18.502997 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:19.504014 master-0 kubenswrapper[7480]: I0308 22:05:19.503925 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:19.504014 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:19.504014 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:19.504014 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:19.504907 master-0 kubenswrapper[7480]: I0308 22:05:19.504049 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:20.503796 master-0 kubenswrapper[7480]: I0308 22:05:20.503621 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:20.503796 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:20.503796 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:20.503796 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:20.503796 master-0 kubenswrapper[7480]: I0308 22:05:20.503699 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:21.503569 master-0 kubenswrapper[7480]: I0308 22:05:21.503040 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:21.503569 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:21.503569 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:21.503569 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:21.504115 master-0 kubenswrapper[7480]: I0308 22:05:21.503613 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:22.504850 master-0 kubenswrapper[7480]: I0308 22:05:22.504725 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:22.504850 master-0 kubenswrapper[7480]: [-]has-synced failed: 
reason withheld Mar 08 22:05:22.504850 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:22.504850 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:22.506629 master-0 kubenswrapper[7480]: I0308 22:05:22.504910 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:23.503840 master-0 kubenswrapper[7480]: I0308 22:05:23.503738 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:23.503840 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:23.503840 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:23.503840 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:23.504645 master-0 kubenswrapper[7480]: I0308 22:05:23.503850 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:24.503789 master-0 kubenswrapper[7480]: I0308 22:05:24.503634 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:24.503789 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:24.503789 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:24.503789 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:24.503789 master-0 kubenswrapper[7480]: I0308 22:05:24.503792 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:26.200552 master-0 kubenswrapper[7480]: I0308 22:05:26.199010 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:26.200552 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:26.200552 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:26.200552 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:26.200552 master-0 kubenswrapper[7480]: I0308 22:05:26.199148 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:26.503866 master-0 kubenswrapper[7480]: I0308 22:05:26.503648 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:26.503866 master-0 kubenswrapper[7480]: [-]has-synced 
failed: reason withheld Mar 08 22:05:26.503866 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:26.503866 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:26.503866 master-0 kubenswrapper[7480]: I0308 22:05:26.503785 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:27.503816 master-0 kubenswrapper[7480]: I0308 22:05:27.503675 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:27.503816 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:27.503816 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:27.503816 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:27.504921 master-0 kubenswrapper[7480]: I0308 22:05:27.503820 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:28.504446 master-0 kubenswrapper[7480]: I0308 22:05:28.504322 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:28.504446 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:28.504446 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:28.504446 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:28.505597 master-0 kubenswrapper[7480]: I0308 22:05:28.504463 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:29.505206 master-0 kubenswrapper[7480]: I0308 22:05:29.505062 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:29.505206 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:29.505206 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:29.505206 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:29.506057 master-0 kubenswrapper[7480]: I0308 22:05:29.505247 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:30.504701 master-0 kubenswrapper[7480]: I0308 22:05:30.504471 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:30.504701 master-0 kubenswrapper[7480]: 
[-]has-synced failed: reason withheld Mar 08 22:05:30.504701 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:30.504701 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:30.504701 master-0 kubenswrapper[7480]: I0308 22:05:30.504565 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:31.503346 master-0 kubenswrapper[7480]: I0308 22:05:31.503139 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:31.503346 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:31.503346 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:31.503346 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:31.503346 master-0 kubenswrapper[7480]: I0308 22:05:31.503258 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:32.504142 master-0 kubenswrapper[7480]: I0308 22:05:32.504032 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:32.504142 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:32.504142 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:32.504142 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:32.505251 master-0 kubenswrapper[7480]: I0308 22:05:32.504184 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:33.505250 master-0 kubenswrapper[7480]: I0308 22:05:33.505159 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:33.505250 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:33.505250 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:33.505250 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:33.506963 master-0 kubenswrapper[7480]: I0308 22:05:33.505283 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:34.503998 master-0 kubenswrapper[7480]: I0308 22:05:34.503903 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:34.503998 master-0 
kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:34.503998 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:34.503998 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:34.503998 master-0 kubenswrapper[7480]: I0308 22:05:34.504002 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:35.503503 master-0 kubenswrapper[7480]: I0308 22:05:35.503410 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:35.503503 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:35.503503 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:35.503503 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:35.504671 master-0 kubenswrapper[7480]: I0308 22:05:35.503517 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:36.504769 master-0 kubenswrapper[7480]: I0308 22:05:36.504701 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:36.504769 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:36.504769 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:36.504769 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:36.505952 master-0 kubenswrapper[7480]: I0308 22:05:36.505899 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:37.504704 master-0 kubenswrapper[7480]: I0308 22:05:37.504619 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:37.504704 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:37.504704 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:37.504704 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:37.516805 master-0 kubenswrapper[7480]: I0308 22:05:37.504735 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:38.504484 master-0 kubenswrapper[7480]: I0308 22:05:38.504356 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:38.504484 
master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:38.504484 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:38.504484 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:38.505011 master-0 kubenswrapper[7480]: I0308 22:05:38.504507 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:39.503923 master-0 kubenswrapper[7480]: I0308 22:05:39.503835 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:39.503923 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:39.503923 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:39.503923 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:39.504789 master-0 kubenswrapper[7480]: I0308 22:05:39.503957 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:40.503728 master-0 kubenswrapper[7480]: I0308 22:05:40.503495 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:40.503728 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:40.503728 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:40.503728 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:40.503728 master-0 kubenswrapper[7480]: I0308 22:05:40.503696 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:41.503284 master-0 kubenswrapper[7480]: I0308 22:05:41.503166 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:41.503284 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:41.503284 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:41.503284 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:41.503928 master-0 kubenswrapper[7480]: I0308 22:05:41.503287 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:42.504230 master-0 kubenswrapper[7480]: I0308 22:05:42.504123 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 
22:05:42.504230 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:42.504230 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:42.504230 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:42.504932 master-0 kubenswrapper[7480]: I0308 22:05:42.504250 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:43.504552 master-0 kubenswrapper[7480]: I0308 22:05:43.504453 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:43.504552 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:43.504552 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:43.504552 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:43.505810 master-0 kubenswrapper[7480]: I0308 22:05:43.504579 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:44.503709 master-0 kubenswrapper[7480]: I0308 22:05:44.503614 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:44.503709 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:44.503709 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:44.503709 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:44.504314 master-0 kubenswrapper[7480]: I0308 22:05:44.503712 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:45.503248 master-0 kubenswrapper[7480]: I0308 22:05:45.503175 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:45.503248 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:45.503248 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:45.503248 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:45.503248 master-0 kubenswrapper[7480]: I0308 22:05:45.503230 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:46.503509 master-0 kubenswrapper[7480]: I0308 22:05:46.503399 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld 
Mar 08 22:05:46.503509 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:46.503509 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:46.503509 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:46.504514 master-0 kubenswrapper[7480]: I0308 22:05:46.503524 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:47.504245 master-0 kubenswrapper[7480]: I0308 22:05:47.504144 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:47.504245 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:47.504245 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:47.504245 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:47.505559 master-0 kubenswrapper[7480]: I0308 22:05:47.504249 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:48.503670 master-0 kubenswrapper[7480]: I0308 22:05:48.503569 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:48.503670 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:48.503670 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:48.503670 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:48.504160 master-0 kubenswrapper[7480]: I0308 22:05:48.503701 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:49.503834 master-0 kubenswrapper[7480]: I0308 22:05:49.503776 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:49.503834 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:49.503834 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:49.503834 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:49.504702 master-0 kubenswrapper[7480]: I0308 22:05:49.504666 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:50.503641 master-0 kubenswrapper[7480]: I0308 22:05:50.503438 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Mar 08 22:05:50.503641 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:50.503641 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:50.503641 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:50.503641 master-0 kubenswrapper[7480]: I0308 22:05:50.503534 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:51.510313 master-0 kubenswrapper[7480]: I0308 22:05:51.510071 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:51.510313 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:51.510313 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:51.510313 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:51.511218 master-0 kubenswrapper[7480]: I0308 22:05:51.510597 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:52.503932 master-0 kubenswrapper[7480]: I0308 22:05:52.503825 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:52.503932 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:52.503932 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:52.503932 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:52.504491 master-0 kubenswrapper[7480]: I0308 22:05:52.503957 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:53.504178 master-0 kubenswrapper[7480]: I0308 22:05:53.504097 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:53.504178 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:53.504178 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:53.504178 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:53.505490 master-0 kubenswrapper[7480]: I0308 22:05:53.504186 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:54.504348 master-0 kubenswrapper[7480]: I0308 22:05:54.504255 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: 
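What the long failure run shows: the kubelet probes the router once per period (one second here) and, once the startup probe's failure budget (failureThreshold × periodSeconds) is exhausted, kills the container — the sequence at 22:04:18 above. The kill honored the pod's one-hour grace period (gracePeriod=3600), the old container exited at 22:05:04, the replacement started at 22:05:05, and it immediately re-entered the same probe loop at 22:05:06. Below is a sketch of that loop; the periodSeconds/failureThreshold values and the probed URL are assumptions, since the router deployment's actual probe spec is not part of this excerpt.

// probe_loop_sketch.go: an approximation of the kubelet's startup-probe loop as
// seen in this log: one HTTP probe per second until the probe succeeds or the
// failure budget is exhausted, at which point the container is restarted.
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce performs a single HTTP GET probe; any status >= 400 is a failure,
// mirroring the "HTTP probe failed with statuscode: 500" entries above.
func probeOnce(client *http.Client, url string) error {
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused", as in the liveness entry
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	const (
		periodSeconds    = 1  // matches the one-second cadence in the log
		failureThreshold = 20 // assumption: gives a 20-second startup budget
	)
	client := &http.Client{Timeout: time.Second}
	url := "http://127.0.0.1:1936/healthz" // placeholder endpoint, not the router's real one
	failures := 0
	for range time.Tick(periodSeconds * time.Second) {
		if err := probeOnce(client, url); err != nil {
			failures++
			fmt.Printf("Probe failed (%d/%d): %v\n", failures, failureThreshold, err)
			if failures >= failureThreshold {
				fmt.Println("startup probe exhausted: kubelet would kill and restart the container")
				return
			}
			continue
		}
		fmt.Println("probe succeeded: container considered started")
		return
	}
}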
reason withheld Mar 08 22:05:54.504348 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:54.504348 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:54.504348 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:54.505420 master-0 kubenswrapper[7480]: I0308 22:05:54.504356 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:55.504428 master-0 kubenswrapper[7480]: I0308 22:05:55.504324 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:05:55.504428 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:05:55.504428 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:05:55.504428 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:05:55.505680 master-0 kubenswrapper[7480]: I0308 22:05:55.504454 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:05:56.451256 master-0 kubenswrapper[7480]: I0308 22:05:56.451179 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-cjdgr_84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/ingress-operator/2.log" Mar 08 22:05:56.452586 master-0 kubenswrapper[7480]: I0308 22:05:56.452546 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-cjdgr_84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/ingress-operator/1.log" Mar 08 22:05:56.453574 master-0 kubenswrapper[7480]: I0308 22:05:56.453506 7480 generic.go:334] "Generic (PLEG): container finished" podID="84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed" containerID="1f4a62d722d99fc6a3743dcd20f8ccf06ee8ac82957a3628d0186bea1711ac1c" exitCode=1 Mar 08 22:05:56.453786 master-0 kubenswrapper[7480]: I0308 22:05:56.453739 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" event={"ID":"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed","Type":"ContainerDied","Data":"1f4a62d722d99fc6a3743dcd20f8ccf06ee8ac82957a3628d0186bea1711ac1c"} Mar 08 22:05:56.453984 master-0 kubenswrapper[7480]: I0308 22:05:56.453955 7480 scope.go:117] "RemoveContainer" containerID="799bcb818f10708811e14b095b41eda5205477d4badc6517a720213a0c436a29" Mar 08 22:05:56.455068 master-0 kubenswrapper[7480]: I0308 22:05:56.455011 7480 scope.go:117] "RemoveContainer" containerID="1f4a62d722d99fc6a3743dcd20f8ccf06ee8ac82957a3628d0186bea1711ac1c" Mar 08 22:05:56.455577 master-0 kubenswrapper[7480]: E0308 22:05:56.455511 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-cjdgr_openshift-ingress-operator(84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" podUID="84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed" Mar 08 22:05:56.503237 master-0 kubenswrapper[7480]: I0308 
[identical probe failure block omitted: 22:05:56]
Mar 08 22:05:57.466821 master-0 kubenswrapper[7480]: I0308 22:05:57.466766 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-cjdgr_84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/ingress-operator/2.log"
[identical probe failure blocks omitted: 22:05:57 through 22:06:07, one per second]
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:06:07.781792 master-0 kubenswrapper[7480]: I0308 22:06:07.781648 7480 scope.go:117] "RemoveContainer" containerID="1f4a62d722d99fc6a3743dcd20f8ccf06ee8ac82957a3628d0186bea1711ac1c" Mar 08 22:06:07.782007 master-0 kubenswrapper[7480]: E0308 22:06:07.781929 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-cjdgr_openshift-ingress-operator(84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" podUID="84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed" Mar 08 22:06:08.503645 master-0 kubenswrapper[7480]: I0308 22:06:08.503524 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:06:08.503645 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:06:08.503645 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:06:08.503645 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:06:08.503645 master-0 kubenswrapper[7480]: I0308 22:06:08.503619 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:06:09.503498 master-0 kubenswrapper[7480]: I0308 22:06:09.503431 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:06:09.503498 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:06:09.503498 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:06:09.503498 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:06:09.504511 master-0 kubenswrapper[7480]: I0308 22:06:09.503521 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:06:10.503514 master-0 kubenswrapper[7480]: I0308 22:06:10.503345 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:06:10.503514 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:06:10.503514 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:06:10.503514 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:06:10.503514 master-0 kubenswrapper[7480]: I0308 22:06:10.503446 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Mar 08 22:06:11.503632 master-0 kubenswrapper[7480]: I0308 22:06:11.503548 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:06:11.503632 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:06:11.503632 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:06:11.503632 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:06:11.504858 master-0 kubenswrapper[7480]: I0308 22:06:11.503682 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:06:12.503568 master-0 kubenswrapper[7480]: I0308 22:06:12.503478 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:06:12.503568 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:06:12.503568 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:06:12.503568 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:06:12.504719 master-0 kubenswrapper[7480]: I0308 22:06:12.503581 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:06:13.507371 master-0 kubenswrapper[7480]: I0308 22:06:13.507238 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:06:13.507371 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:06:13.507371 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:06:13.507371 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:06:13.507371 master-0 kubenswrapper[7480]: I0308 22:06:13.507337 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:06:14.503954 master-0 kubenswrapper[7480]: I0308 22:06:14.503826 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:06:14.503954 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:06:14.503954 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:06:14.503954 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:06:14.503954 master-0 kubenswrapper[7480]: I0308 22:06:14.503945 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 08 22:06:15.504626 master-0 kubenswrapper[7480]: I0308 22:06:15.504544 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:06:15.504626 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:06:15.504626 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:06:15.504626 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:06:15.505777 master-0 kubenswrapper[7480]: I0308 22:06:15.504656 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:06:16.503984 master-0 kubenswrapper[7480]: I0308 22:06:16.503872 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:06:16.503984 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:06:16.503984 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:06:16.503984 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:06:16.503984 master-0 kubenswrapper[7480]: I0308 22:06:16.503978 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:06:17.503654 master-0 kubenswrapper[7480]: I0308 22:06:17.503560 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:06:17.503654 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:06:17.503654 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:06:17.503654 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:06:17.505058 master-0 kubenswrapper[7480]: I0308 22:06:17.503662 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:06:18.503034 master-0 kubenswrapper[7480]: I0308 22:06:18.502905 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:06:18.503034 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:06:18.503034 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:06:18.503034 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:06:18.503034 master-0 kubenswrapper[7480]: I0308 22:06:18.502999 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 08 22:06:19.503759 master-0 kubenswrapper[7480]: I0308 22:06:19.503651 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:06:19.503759 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:06:19.503759 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:06:19.503759 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:06:19.504427 master-0 kubenswrapper[7480]: I0308 22:06:19.503764 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:06:20.504270 master-0 kubenswrapper[7480]: I0308 22:06:20.504120 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:06:20.504270 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:06:20.504270 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:06:20.504270 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:06:20.505354 master-0 kubenswrapper[7480]: I0308 22:06:20.504297 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:06:20.781863 master-0 kubenswrapper[7480]: I0308 22:06:20.781765 7480 scope.go:117] "RemoveContainer" containerID="1f4a62d722d99fc6a3743dcd20f8ccf06ee8ac82957a3628d0186bea1711ac1c" Mar 08 22:06:21.504847 master-0 kubenswrapper[7480]: I0308 22:06:21.504632 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:06:21.504847 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:06:21.504847 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:06:21.504847 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:06:21.504847 master-0 kubenswrapper[7480]: I0308 22:06:21.504740 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:06:21.707608 master-0 kubenswrapper[7480]: I0308 22:06:21.707508 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-cjdgr_84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/ingress-operator/2.log" Mar 08 22:06:21.708233 master-0 kubenswrapper[7480]: I0308 22:06:21.708188 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" event={"ID":"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed","Type":"ContainerStarted","Data":"11be5746bd3e725240b9d330f64ada9a50979ab4691f07ea934a8eda8d86e8b5"} Mar 08 22:06:22.504214 master-0 
kubenswrapper[7480]: I0308 22:06:22.504131 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:06:22.504214 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:06:22.504214 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:06:22.504214 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:06:22.504708 master-0 kubenswrapper[7480]: I0308 22:06:22.504247 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:06:23.503318 master-0 kubenswrapper[7480]: I0308 22:06:23.503236 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:06:23.503318 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:06:23.503318 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:06:23.503318 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:06:23.504509 master-0 kubenswrapper[7480]: I0308 22:06:23.503342 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:06:24.503781 master-0 kubenswrapper[7480]: I0308 22:06:24.503660 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:06:24.503781 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:06:24.503781 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:06:24.503781 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:06:24.504955 master-0 kubenswrapper[7480]: I0308 22:06:24.503795 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:06:25.541299 master-0 kubenswrapper[7480]: I0308 22:06:25.541212 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:06:25.541299 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:06:25.541299 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:06:25.541299 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:06:25.542469 master-0 kubenswrapper[7480]: I0308 22:06:25.541304 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:06:26.504225 
master-0 kubenswrapper[7480]: I0308 22:06:26.504148 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:06:26.504225 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:06:26.504225 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:06:26.504225 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:06:26.504762 master-0 kubenswrapper[7480]: I0308 22:06:26.504269 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:06:27.504099 master-0 kubenswrapper[7480]: I0308 22:06:27.504005 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:06:27.504099 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:06:27.504099 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:06:27.504099 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:06:27.504783 master-0 kubenswrapper[7480]: I0308 22:06:27.504129 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:06:28.504064 master-0 kubenswrapper[7480]: I0308 22:06:28.503941 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:06:28.504064 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:06:28.504064 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:06:28.504064 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:06:28.505173 master-0 kubenswrapper[7480]: I0308 22:06:28.504106 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:06:29.504473 master-0 kubenswrapper[7480]: I0308 22:06:29.504383 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:06:29.504473 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:06:29.504473 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:06:29.504473 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:06:29.505705 master-0 kubenswrapper[7480]: I0308 22:06:29.504498 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 
22:06:30.503983 master-0 kubenswrapper[7480]: I0308 22:06:30.503883 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:06:30.503983 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:06:30.503983 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:06:30.503983 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:06:30.504486 master-0 kubenswrapper[7480]: I0308 22:06:30.503997 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:06:31.503941 master-0 kubenswrapper[7480]: I0308 22:06:31.503887 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:06:31.503941 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:06:31.503941 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:06:31.503941 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:06:31.505045 master-0 kubenswrapper[7480]: I0308 22:06:31.503953 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:06:32.503175 master-0 kubenswrapper[7480]: I0308 22:06:32.503059 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:06:32.503175 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:06:32.503175 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:06:32.503175 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:06:32.503763 master-0 kubenswrapper[7480]: I0308 22:06:32.503703 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:06:33.504166 master-0 kubenswrapper[7480]: I0308 22:06:33.504043 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:06:33.504166 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:06:33.504166 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:06:33.504166 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:06:33.505408 master-0 kubenswrapper[7480]: I0308 22:06:33.504205 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" 
[identical probe failure block omitted: 22:06:34]
Mar 08 22:06:35.039703 master-0 kubenswrapper[7480]: I0308 22:06:35.039596 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-retry-1-master-0"]
Mar 08 22:06:35.041464 master-0 kubenswrapper[7480]: I0308 22:06:35.041417 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 08 22:06:35.047045 master-0 kubenswrapper[7480]: I0308 22:06:35.046973 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-v7cvh"
Mar 08 22:06:35.047443 master-0 kubenswrapper[7480]: I0308 22:06:35.047377 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 08 22:06:35.057204 master-0 kubenswrapper[7480]: I0308 22:06:35.056934 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-retry-1-master-0"]
Mar 08 22:06:35.156661 master-0 kubenswrapper[7480]: I0308 22:06:35.156539 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a90a446-01fc-4032-9d02-d82e25084ea9-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"5a90a446-01fc-4032-9d02-d82e25084ea9\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 08 22:06:35.156661 master-0 kubenswrapper[7480]: I0308 22:06:35.156663 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a90a446-01fc-4032-9d02-d82e25084ea9-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"5a90a446-01fc-4032-9d02-d82e25084ea9\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 08 22:06:35.157227 master-0 kubenswrapper[7480]: I0308 22:06:35.156712 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5a90a446-01fc-4032-9d02-d82e25084ea9-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"5a90a446-01fc-4032-9d02-d82e25084ea9\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 08 22:06:35.259214 master-0 kubenswrapper[7480]: I0308 22:06:35.259065 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a90a446-01fc-4032-9d02-d82e25084ea9-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"5a90a446-01fc-4032-9d02-d82e25084ea9\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 08 22:06:35.259548 master-0 kubenswrapper[7480]: I0308 22:06:35.259306 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a90a446-01fc-4032-9d02-d82e25084ea9-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"5a90a446-01fc-4032-9d02-d82e25084ea9\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 08 22:06:35.259548 master-0 kubenswrapper[7480]: I0308 22:06:35.259386 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a90a446-01fc-4032-9d02-d82e25084ea9-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"5a90a446-01fc-4032-9d02-d82e25084ea9\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 08 22:06:35.259548 master-0 kubenswrapper[7480]: I0308 22:06:35.259439 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5a90a446-01fc-4032-9d02-d82e25084ea9-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"5a90a446-01fc-4032-9d02-d82e25084ea9\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 08 22:06:35.259810 master-0 kubenswrapper[7480]: I0308 22:06:35.259715 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5a90a446-01fc-4032-9d02-d82e25084ea9-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"5a90a446-01fc-4032-9d02-d82e25084ea9\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 08 22:06:35.291593 master-0 kubenswrapper[7480]: I0308 22:06:35.291433 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a90a446-01fc-4032-9d02-d82e25084ea9-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"5a90a446-01fc-4032-9d02-d82e25084ea9\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 08 22:06:35.382675 master-0 kubenswrapper[7480]: I0308 22:06:35.382556 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
[identical probe failure block omitted: 22:06:35]
Mar 08 22:06:35.713147 master-0 kubenswrapper[7480]: I0308 22:06:35.713049 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-retry-1-master-0"]
Mar 08 22:06:35.717041 master-0 kubenswrapper[7480]: W0308 22:06:35.716946 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5a90a446_01fc_4032_9d02_d82e25084ea9.slice/crio-9f59e2b32d6bb3b93d7fd47687e65c1a832f20441aaa4a265c3bd462b3ab818c WatchSource:0}: Error finding container 9f59e2b32d6bb3b93d7fd47687e65c1a832f20441aaa4a265c3bd462b3ab818c: Status 404 returned error can't find the container with id 9f59e2b32d6bb3b93d7fd47687e65c1a832f20441aaa4a265c3bd462b3ab818c
Mar 08 22:06:35.831227 master-0 kubenswrapper[7480]: I0308 22:06:35.831151 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" event={"ID":"5a90a446-01fc-4032-9d02-d82e25084ea9","Type":"ContainerStarted","Data":"9f59e2b32d6bb3b93d7fd47687e65c1a832f20441aaa4a265c3bd462b3ab818c"}
[identical probe failure block omitted: 22:06:36]
Mar 08 22:06:36.848430 master-0 kubenswrapper[7480]: I0308 22:06:36.848357 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" event={"ID":"5a90a446-01fc-4032-9d02-d82e25084ea9","Type":"ContainerStarted","Data":"3eb560de291b5a27e85796d034a6bc8bf292b3b1a9fe462699eef23cc0bb8a73"}
Mar 08 22:06:36.873705 master-0 kubenswrapper[7480]: I0308 22:06:36.873584 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" podStartSLOduration=1.873551955 podStartE2EDuration="1.873551955s" podCreationTimestamp="2026-03-08 22:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:06:36.872968411 +0000 UTC m=+547.326589113" watchObservedRunningTime="2026-03-08 22:06:36.873551955 +0000 UTC m=+547.327172597"
[identical probe failure blocks omitted: 22:06:37 through 22:06:40, one per second]
[identical probe failure blocks omitted: 22:06:41 through 22:06:59, one per second]
Mar 08 22:07:00.504304 master-0 kubenswrapper[7480]: I0308 22:07:00.504206 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:07:00.504304 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:07:00.504304 master-0
kubenswrapper[7480]: [+]process-running ok Mar 08 22:07:00.504304 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:07:00.504304 master-0 kubenswrapper[7480]: I0308 22:07:00.504303 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:07:01.502663 master-0 kubenswrapper[7480]: I0308 22:07:01.502560 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:07:01.502663 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:07:01.502663 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:07:01.502663 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:07:01.503063 master-0 kubenswrapper[7480]: I0308 22:07:01.503033 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:07:02.504312 master-0 kubenswrapper[7480]: I0308 22:07:02.504246 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:07:02.504312 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:07:02.504312 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:07:02.504312 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:07:02.505055 master-0 kubenswrapper[7480]: I0308 22:07:02.504348 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:07:03.505162 master-0 kubenswrapper[7480]: I0308 22:07:03.505042 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:07:03.505162 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:07:03.505162 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:07:03.505162 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:07:03.505162 master-0 kubenswrapper[7480]: I0308 22:07:03.505179 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:07:04.504596 master-0 kubenswrapper[7480]: I0308 22:07:04.504511 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:07:04.504596 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:07:04.504596 
master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:07:04.504596 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:07:04.506331 master-0 kubenswrapper[7480]: I0308 22:07:04.506196 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:07:05.504127 master-0 kubenswrapper[7480]: I0308 22:07:05.504026 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:07:05.504127 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:07:05.504127 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:07:05.504127 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:07:05.504563 master-0 kubenswrapper[7480]: I0308 22:07:05.504135 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:07:05.504563 master-0 kubenswrapper[7480]: I0308 22:07:05.504205 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:07:05.505156 master-0 kubenswrapper[7480]: I0308 22:07:05.505110 7480 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"8e67a6a8195a1bf0907601fa19ffa597a648c56ee5160c3ec3e81c5ecf98df23"} pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" containerMessage="Container router failed startup probe, will be restarted" Mar 08 22:07:05.505259 master-0 kubenswrapper[7480]: I0308 22:07:05.505179 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" containerID="cri-o://8e67a6a8195a1bf0907601fa19ffa597a648c56ee5160c3ec3e81c5ecf98df23" gracePeriod=3600 Mar 08 22:07:09.055605 master-0 kubenswrapper[7480]: I0308 22:07:09.055470 7480 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 08 22:07:09.056767 master-0 kubenswrapper[7480]: I0308 22:07:09.055914 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" containerID="cri-o://fb0495eb908674ff43dd829011c8a9c25cb94f391266361a6fd4682f944d4674" gracePeriod=30 Mar 08 22:07:09.056767 master-0 kubenswrapper[7480]: I0308 22:07:09.055989 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" containerID="cri-o://fb45ca0ddc44058990168eeda3f6ed62790667fe93e6e2fe4d4fe2bd256830e4" gracePeriod=30 Mar 08 22:07:09.058271 master-0 kubenswrapper[7480]: I0308 22:07:09.057937 7480 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 08 22:07:09.058473 master-0 
kubenswrapper[7480]: E0308 22:07:09.058420 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 22:07:09.058473 master-0 kubenswrapper[7480]: I0308 22:07:09.058465 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 22:07:09.058727 master-0 kubenswrapper[7480]: E0308 22:07:09.058493 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 08 22:07:09.058727 master-0 kubenswrapper[7480]: I0308 22:07:09.058512 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 08 22:07:09.058727 master-0 kubenswrapper[7480]: E0308 22:07:09.058548 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 22:07:09.058727 master-0 kubenswrapper[7480]: I0308 22:07:09.058568 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 22:07:09.058727 master-0 kubenswrapper[7480]: E0308 22:07:09.058604 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 22:07:09.058727 master-0 kubenswrapper[7480]: I0308 22:07:09.058621 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 22:07:09.059196 master-0 kubenswrapper[7480]: I0308 22:07:09.058883 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 22:07:09.059196 master-0 kubenswrapper[7480]: I0308 22:07:09.058919 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 08 22:07:09.059196 master-0 kubenswrapper[7480]: I0308 22:07:09.058955 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 22:07:09.059196 master-0 kubenswrapper[7480]: I0308 22:07:09.058980 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 22:07:09.059196 master-0 kubenswrapper[7480]: I0308 22:07:09.059009 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 22:07:09.059517 master-0 kubenswrapper[7480]: E0308 22:07:09.059300 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 22:07:09.059517 master-0 kubenswrapper[7480]: I0308 22:07:09.059332 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 22:07:09.062471 master-0 kubenswrapper[7480]: I0308 22:07:09.062395 7480 util.go:30] "No sandbox for pod can be found. 
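At 22:07:05 the startup probe finally exceeds its failure threshold and the kubelet restarts the router by killing it with gracePeriod=3600, presumably the pod's terminationGracePeriodSeconds; the bootstrap kube-controller-manager containers are killed with 30s when their static-pod file is removed, and the CPU and memory managers then drop their stale per-container state for the replaced pod UID. The grace-period contract is roughly: deliver SIGTERM, wait up to the grace period for exit, then escalate to SIGKILL. A sketch of that contract only (in reality the kubelet asks cri-o via the CRI to stop the container; signalling a PID directly here is purely illustrative):

```go
package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

// stopWithGrace sketches "Killing container with a grace period": send
// SIGTERM, give the process up to grace to exit, then SIGKILL.
func stopWithGrace(pid int, grace time.Duration) error {
	proc, err := os.FindProcess(pid)
	if err != nil {
		return err
	}
	if err := proc.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	deadline := time.Now().Add(grace)
	for time.Now().Before(deadline) {
		// Signal 0 only checks liveness; an error means the process is gone.
		if err := proc.Signal(syscall.Signal(0)); err != nil {
			return nil // exited within the grace period
		}
		time.Sleep(100 * time.Millisecond)
	}
	fmt.Println("grace period expired, escalating to SIGKILL")
	return proc.Signal(syscall.SIGKILL)
}

func main() {
	// 3600 for the router and 30 for the bootstrap pods in the log above;
	// the PID here is illustrative.
	_ = stopWithGrace(12345, 30*time.Second)
}
```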
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:07:09.117040 master-0 kubenswrapper[7480]: I0308 22:07:09.116940 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5bd68ed75dc57765fa56dbf42c892ba9-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"5bd68ed75dc57765fa56dbf42c892ba9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:07:09.117344 master-0 kubenswrapper[7480]: I0308 22:07:09.117199 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5bd68ed75dc57765fa56dbf42c892ba9-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"5bd68ed75dc57765fa56dbf42c892ba9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:07:09.139814 master-0 kubenswrapper[7480]: I0308 22:07:09.139729 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 08 22:07:09.218895 master-0 kubenswrapper[7480]: I0308 22:07:09.218809 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5bd68ed75dc57765fa56dbf42c892ba9-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"5bd68ed75dc57765fa56dbf42c892ba9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:07:09.219112 master-0 kubenswrapper[7480]: I0308 22:07:09.219003 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5bd68ed75dc57765fa56dbf42c892ba9-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"5bd68ed75dc57765fa56dbf42c892ba9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:07:09.219298 master-0 kubenswrapper[7480]: I0308 22:07:09.219251 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5bd68ed75dc57765fa56dbf42c892ba9-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"5bd68ed75dc57765fa56dbf42c892ba9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:07:09.219351 master-0 kubenswrapper[7480]: I0308 22:07:09.219302 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5bd68ed75dc57765fa56dbf42c892ba9-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"5bd68ed75dc57765fa56dbf42c892ba9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:07:09.234766 master-0 kubenswrapper[7480]: I0308 22:07:09.234694 7480 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 22:07:09.322323 master-0 kubenswrapper[7480]: I0308 22:07:09.320044 7480 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="57553c71-144e-4ae1-a2d0-cb81a829e595" Mar 08 22:07:09.421842 master-0 kubenswrapper[7480]: I0308 22:07:09.421760 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") " Mar 08 22:07:09.421842 master-0 kubenswrapper[7480]: I0308 22:07:09.421836 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") " Mar 08 22:07:09.422208 master-0 kubenswrapper[7480]: I0308 22:07:09.421913 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") " Mar 08 22:07:09.422208 master-0 kubenswrapper[7480]: I0308 22:07:09.421931 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config" (OuterVolumeSpecName: "config") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:07:09.422208 master-0 kubenswrapper[7480]: I0308 22:07:09.422038 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") " Mar 08 22:07:09.422208 master-0 kubenswrapper[7480]: I0308 22:07:09.422102 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") " Mar 08 22:07:09.422208 master-0 kubenswrapper[7480]: I0308 22:07:09.422106 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:07:09.422208 master-0 kubenswrapper[7480]: I0308 22:07:09.422135 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets" (OuterVolumeSpecName: "secrets") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "secrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:07:09.422208 master-0 kubenswrapper[7480]: I0308 22:07:09.422149 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs" (OuterVolumeSpecName: "logs") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:07:09.422563 master-0 kubenswrapper[7480]: I0308 22:07:09.422222 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:07:09.422563 master-0 kubenswrapper[7480]: I0308 22:07:09.422471 7480 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") on node \"master-0\" DevicePath \"\"" Mar 08 22:07:09.422563 master-0 kubenswrapper[7480]: I0308 22:07:09.422494 7480 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Mar 08 22:07:09.422563 master-0 kubenswrapper[7480]: I0308 22:07:09.422507 7480 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") on node \"master-0\" DevicePath \"\"" Mar 08 22:07:09.422563 master-0 kubenswrapper[7480]: I0308 22:07:09.422518 7480 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") on node \"master-0\" DevicePath \"\"" Mar 08 22:07:09.422563 master-0 kubenswrapper[7480]: I0308 22:07:09.422529 7480 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Mar 08 22:07:09.431885 master-0 kubenswrapper[7480]: I0308 22:07:09.431826 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:07:09.797387 master-0 kubenswrapper[7480]: I0308 22:07:09.797309 7480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f78c05e1499b533b83f091333d61f045" path="/var/lib/kubelet/pods/f78c05e1499b533b83f091333d61f045/volumes" Mar 08 22:07:09.797895 master-0 kubenswrapper[7480]: I0308 22:07:09.797859 7480 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Mar 08 22:07:09.826590 master-0 kubenswrapper[7480]: I0308 22:07:09.826528 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 08 22:07:09.826590 master-0 kubenswrapper[7480]: I0308 22:07:09.826577 7480 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="57553c71-144e-4ae1-a2d0-cb81a829e595" Mar 08 22:07:09.830258 master-0 kubenswrapper[7480]: I0308 22:07:09.830203 7480 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 08 22:07:09.830328 master-0 kubenswrapper[7480]: I0308 22:07:09.830256 7480 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="57553c71-144e-4ae1-a2d0-cb81a829e595" Mar 08 22:07:09.841842 master-0 kubenswrapper[7480]: I0308 22:07:09.841779 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 08 22:07:09.842847 master-0 kubenswrapper[7480]: I0308 22:07:09.842810 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 08 22:07:09.845639 master-0 kubenswrapper[7480]: I0308 22:07:09.845216 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-97xjk" Mar 08 22:07:09.846450 master-0 kubenswrapper[7480]: I0308 22:07:09.846419 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Mar 08 22:07:09.864545 master-0 kubenswrapper[7480]: I0308 22:07:09.864471 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 08 22:07:09.930156 master-0 kubenswrapper[7480]: I0308 22:07:09.930105 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8f9a1ffa-fdef-4201-81a9-35b944f8c193-var-lock\") pod \"installer-2-master-0\" (UID: \"8f9a1ffa-fdef-4201-81a9-35b944f8c193\") " pod="openshift-etcd/installer-2-master-0" Mar 08 22:07:09.930156 master-0 kubenswrapper[7480]: I0308 22:07:09.930163 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f9a1ffa-fdef-4201-81a9-35b944f8c193-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"8f9a1ffa-fdef-4201-81a9-35b944f8c193\") " pod="openshift-etcd/installer-2-master-0" Mar 08 22:07:09.930398 master-0 kubenswrapper[7480]: I0308 22:07:09.930250 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f9a1ffa-fdef-4201-81a9-35b944f8c193-kube-api-access\") pod \"installer-2-master-0\" (UID: \"8f9a1ffa-fdef-4201-81a9-35b944f8c193\") " 
pod="openshift-etcd/installer-2-master-0" Mar 08 22:07:10.032422 master-0 kubenswrapper[7480]: I0308 22:07:10.032370 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8f9a1ffa-fdef-4201-81a9-35b944f8c193-var-lock\") pod \"installer-2-master-0\" (UID: \"8f9a1ffa-fdef-4201-81a9-35b944f8c193\") " pod="openshift-etcd/installer-2-master-0" Mar 08 22:07:10.032584 master-0 kubenswrapper[7480]: I0308 22:07:10.032547 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8f9a1ffa-fdef-4201-81a9-35b944f8c193-var-lock\") pod \"installer-2-master-0\" (UID: \"8f9a1ffa-fdef-4201-81a9-35b944f8c193\") " pod="openshift-etcd/installer-2-master-0" Mar 08 22:07:10.032643 master-0 kubenswrapper[7480]: I0308 22:07:10.032623 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f9a1ffa-fdef-4201-81a9-35b944f8c193-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"8f9a1ffa-fdef-4201-81a9-35b944f8c193\") " pod="openshift-etcd/installer-2-master-0" Mar 08 22:07:10.032695 master-0 kubenswrapper[7480]: I0308 22:07:10.032676 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f9a1ffa-fdef-4201-81a9-35b944f8c193-kube-api-access\") pod \"installer-2-master-0\" (UID: \"8f9a1ffa-fdef-4201-81a9-35b944f8c193\") " pod="openshift-etcd/installer-2-master-0" Mar 08 22:07:10.032740 master-0 kubenswrapper[7480]: I0308 22:07:10.032714 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f9a1ffa-fdef-4201-81a9-35b944f8c193-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"8f9a1ffa-fdef-4201-81a9-35b944f8c193\") " pod="openshift-etcd/installer-2-master-0" Mar 08 22:07:10.056663 master-0 kubenswrapper[7480]: I0308 22:07:10.056584 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f9a1ffa-fdef-4201-81a9-35b944f8c193-kube-api-access\") pod \"installer-2-master-0\" (UID: \"8f9a1ffa-fdef-4201-81a9-35b944f8c193\") " pod="openshift-etcd/installer-2-master-0" Mar 08 22:07:10.134192 master-0 kubenswrapper[7480]: I0308 22:07:10.134113 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5bd68ed75dc57765fa56dbf42c892ba9","Type":"ContainerStarted","Data":"7c8dd2936103822779238860b93c30ecc04ca409eda643b00bfa6d9998b13293"} Mar 08 22:07:10.134619 master-0 kubenswrapper[7480]: I0308 22:07:10.134225 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5bd68ed75dc57765fa56dbf42c892ba9","Type":"ContainerStarted","Data":"f6a438a833eacd73d743c77bfb467be55e7560ad44e884b3ffe14f62aaa3a243"} Mar 08 22:07:10.134619 master-0 kubenswrapper[7480]: I0308 22:07:10.134246 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5bd68ed75dc57765fa56dbf42c892ba9","Type":"ContainerStarted","Data":"d55e22a8fa78e8af31141e695d44afdaa4eea85b586433b1ae5a2ac2f30e6710"} Mar 08 22:07:10.138061 master-0 kubenswrapper[7480]: I0308 22:07:10.138011 7480 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" 
containerID="fb45ca0ddc44058990168eeda3f6ed62790667fe93e6e2fe4d4fe2bd256830e4" exitCode=0 Mar 08 22:07:10.138061 master-0 kubenswrapper[7480]: I0308 22:07:10.138054 7480 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="fb0495eb908674ff43dd829011c8a9c25cb94f391266361a6fd4682f944d4674" exitCode=0 Mar 08 22:07:10.138173 master-0 kubenswrapper[7480]: I0308 22:07:10.138112 7480 scope.go:117] "RemoveContainer" containerID="fb45ca0ddc44058990168eeda3f6ed62790667fe93e6e2fe4d4fe2bd256830e4" Mar 08 22:07:10.138173 master-0 kubenswrapper[7480]: I0308 22:07:10.138135 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 22:07:10.140911 master-0 kubenswrapper[7480]: I0308 22:07:10.140889 7480 generic.go:334] "Generic (PLEG): container finished" podID="5a90a446-01fc-4032-9d02-d82e25084ea9" containerID="3eb560de291b5a27e85796d034a6bc8bf292b3b1a9fe462699eef23cc0bb8a73" exitCode=0 Mar 08 22:07:10.140979 master-0 kubenswrapper[7480]: I0308 22:07:10.140924 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" event={"ID":"5a90a446-01fc-4032-9d02-d82e25084ea9","Type":"ContainerDied","Data":"3eb560de291b5a27e85796d034a6bc8bf292b3b1a9fe462699eef23cc0bb8a73"} Mar 08 22:07:10.165563 master-0 kubenswrapper[7480]: I0308 22:07:10.165519 7480 scope.go:117] "RemoveContainer" containerID="1daa71553c686e610a1691e8e414cf629eaef6dfe3713445234ba8f4abd369b4" Mar 08 22:07:10.176339 master-0 kubenswrapper[7480]: I0308 22:07:10.175638 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 08 22:07:10.194188 master-0 kubenswrapper[7480]: I0308 22:07:10.194140 7480 scope.go:117] "RemoveContainer" containerID="fb0495eb908674ff43dd829011c8a9c25cb94f391266361a6fd4682f944d4674" Mar 08 22:07:10.213211 master-0 kubenswrapper[7480]: I0308 22:07:10.213179 7480 scope.go:117] "RemoveContainer" containerID="fb45ca0ddc44058990168eeda3f6ed62790667fe93e6e2fe4d4fe2bd256830e4" Mar 08 22:07:10.215389 master-0 kubenswrapper[7480]: E0308 22:07:10.215338 7480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb45ca0ddc44058990168eeda3f6ed62790667fe93e6e2fe4d4fe2bd256830e4\": container with ID starting with fb45ca0ddc44058990168eeda3f6ed62790667fe93e6e2fe4d4fe2bd256830e4 not found: ID does not exist" containerID="fb45ca0ddc44058990168eeda3f6ed62790667fe93e6e2fe4d4fe2bd256830e4" Mar 08 22:07:10.215461 master-0 kubenswrapper[7480]: I0308 22:07:10.215412 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb45ca0ddc44058990168eeda3f6ed62790667fe93e6e2fe4d4fe2bd256830e4"} err="failed to get container status \"fb45ca0ddc44058990168eeda3f6ed62790667fe93e6e2fe4d4fe2bd256830e4\": rpc error: code = NotFound desc = could not find container \"fb45ca0ddc44058990168eeda3f6ed62790667fe93e6e2fe4d4fe2bd256830e4\": container with ID starting with fb45ca0ddc44058990168eeda3f6ed62790667fe93e6e2fe4d4fe2bd256830e4 not found: ID does not exist" Mar 08 22:07:10.215461 master-0 kubenswrapper[7480]: I0308 22:07:10.215455 7480 scope.go:117] "RemoveContainer" containerID="1daa71553c686e610a1691e8e414cf629eaef6dfe3713445234ba8f4abd369b4" Mar 08 22:07:10.216005 master-0 kubenswrapper[7480]: E0308 22:07:10.215972 7480 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"1daa71553c686e610a1691e8e414cf629eaef6dfe3713445234ba8f4abd369b4\": container with ID starting with 1daa71553c686e610a1691e8e414cf629eaef6dfe3713445234ba8f4abd369b4 not found: ID does not exist" containerID="1daa71553c686e610a1691e8e414cf629eaef6dfe3713445234ba8f4abd369b4" Mar 08 22:07:10.216086 master-0 kubenswrapper[7480]: I0308 22:07:10.216010 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1daa71553c686e610a1691e8e414cf629eaef6dfe3713445234ba8f4abd369b4"} err="failed to get container status \"1daa71553c686e610a1691e8e414cf629eaef6dfe3713445234ba8f4abd369b4\": rpc error: code = NotFound desc = could not find container \"1daa71553c686e610a1691e8e414cf629eaef6dfe3713445234ba8f4abd369b4\": container with ID starting with 1daa71553c686e610a1691e8e414cf629eaef6dfe3713445234ba8f4abd369b4 not found: ID does not exist" Mar 08 22:07:10.216086 master-0 kubenswrapper[7480]: I0308 22:07:10.216046 7480 scope.go:117] "RemoveContainer" containerID="fb0495eb908674ff43dd829011c8a9c25cb94f391266361a6fd4682f944d4674" Mar 08 22:07:10.216559 master-0 kubenswrapper[7480]: E0308 22:07:10.216535 7480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb0495eb908674ff43dd829011c8a9c25cb94f391266361a6fd4682f944d4674\": container with ID starting with fb0495eb908674ff43dd829011c8a9c25cb94f391266361a6fd4682f944d4674 not found: ID does not exist" containerID="fb0495eb908674ff43dd829011c8a9c25cb94f391266361a6fd4682f944d4674" Mar 08 22:07:10.216619 master-0 kubenswrapper[7480]: I0308 22:07:10.216556 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb0495eb908674ff43dd829011c8a9c25cb94f391266361a6fd4682f944d4674"} err="failed to get container status \"fb0495eb908674ff43dd829011c8a9c25cb94f391266361a6fd4682f944d4674\": rpc error: code = NotFound desc = could not find container \"fb0495eb908674ff43dd829011c8a9c25cb94f391266361a6fd4682f944d4674\": container with ID starting with fb0495eb908674ff43dd829011c8a9c25cb94f391266361a6fd4682f944d4674 not found: ID does not exist" Mar 08 22:07:10.216619 master-0 kubenswrapper[7480]: I0308 22:07:10.216572 7480 scope.go:117] "RemoveContainer" containerID="fb45ca0ddc44058990168eeda3f6ed62790667fe93e6e2fe4d4fe2bd256830e4" Mar 08 22:07:10.216974 master-0 kubenswrapper[7480]: I0308 22:07:10.216948 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb45ca0ddc44058990168eeda3f6ed62790667fe93e6e2fe4d4fe2bd256830e4"} err="failed to get container status \"fb45ca0ddc44058990168eeda3f6ed62790667fe93e6e2fe4d4fe2bd256830e4\": rpc error: code = NotFound desc = could not find container \"fb45ca0ddc44058990168eeda3f6ed62790667fe93e6e2fe4d4fe2bd256830e4\": container with ID starting with fb45ca0ddc44058990168eeda3f6ed62790667fe93e6e2fe4d4fe2bd256830e4 not found: ID does not exist" Mar 08 22:07:10.216974 master-0 kubenswrapper[7480]: I0308 22:07:10.216969 7480 scope.go:117] "RemoveContainer" containerID="1daa71553c686e610a1691e8e414cf629eaef6dfe3713445234ba8f4abd369b4" Mar 08 22:07:10.217381 master-0 kubenswrapper[7480]: I0308 22:07:10.217344 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1daa71553c686e610a1691e8e414cf629eaef6dfe3713445234ba8f4abd369b4"} err="failed to get container status \"1daa71553c686e610a1691e8e414cf629eaef6dfe3713445234ba8f4abd369b4\": 
rpc error: code = NotFound desc = could not find container \"1daa71553c686e610a1691e8e414cf629eaef6dfe3713445234ba8f4abd369b4\": container with ID starting with 1daa71553c686e610a1691e8e414cf629eaef6dfe3713445234ba8f4abd369b4 not found: ID does not exist" Mar 08 22:07:10.217439 master-0 kubenswrapper[7480]: I0308 22:07:10.217377 7480 scope.go:117] "RemoveContainer" containerID="fb0495eb908674ff43dd829011c8a9c25cb94f391266361a6fd4682f944d4674" Mar 08 22:07:10.217847 master-0 kubenswrapper[7480]: I0308 22:07:10.217810 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb0495eb908674ff43dd829011c8a9c25cb94f391266361a6fd4682f944d4674"} err="failed to get container status \"fb0495eb908674ff43dd829011c8a9c25cb94f391266361a6fd4682f944d4674\": rpc error: code = NotFound desc = could not find container \"fb0495eb908674ff43dd829011c8a9c25cb94f391266361a6fd4682f944d4674\": container with ID starting with fb0495eb908674ff43dd829011c8a9c25cb94f391266361a6fd4682f944d4674 not found: ID does not exist" Mar 08 22:07:10.667040 master-0 kubenswrapper[7480]: I0308 22:07:10.666987 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 08 22:07:10.675539 master-0 kubenswrapper[7480]: W0308 22:07:10.675469 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8f9a1ffa_fdef_4201_81a9_35b944f8c193.slice/crio-b10a7439b4f05569de6ee0e41f25c0e406a481406829e6ce9ab87733d5ae443c WatchSource:0}: Error finding container b10a7439b4f05569de6ee0e41f25c0e406a481406829e6ce9ab87733d5ae443c: Status 404 returned error can't find the container with id b10a7439b4f05569de6ee0e41f25c0e406a481406829e6ce9ab87733d5ae443c Mar 08 22:07:11.155001 master-0 kubenswrapper[7480]: I0308 22:07:11.154893 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"8f9a1ffa-fdef-4201-81a9-35b944f8c193","Type":"ContainerStarted","Data":"8b1f61f93e111d7a59ff7b3af6ad621f3547dafb0a9264256b214c4d46121676"} Mar 08 22:07:11.155001 master-0 kubenswrapper[7480]: I0308 22:07:11.154960 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"8f9a1ffa-fdef-4201-81a9-35b944f8c193","Type":"ContainerStarted","Data":"b10a7439b4f05569de6ee0e41f25c0e406a481406829e6ce9ab87733d5ae443c"} Mar 08 22:07:11.158907 master-0 kubenswrapper[7480]: I0308 22:07:11.158842 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5bd68ed75dc57765fa56dbf42c892ba9","Type":"ContainerStarted","Data":"466b16bcbc33a17d2866a8f410cba3d2344e644048440677e9911e5f8994442c"} Mar 08 22:07:11.158907 master-0 kubenswrapper[7480]: I0308 22:07:11.158875 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5bd68ed75dc57765fa56dbf42c892ba9","Type":"ContainerStarted","Data":"bdac6cdbbd06980a741f7b7f070eac287d90292673acfd8284f379e07904e0d4"} Mar 08 22:07:11.220055 master-0 kubenswrapper[7480]: I0308 22:07:11.219945 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-master-0" podStartSLOduration=2.219916421 podStartE2EDuration="2.219916421s" podCreationTimestamp="2026-03-08 22:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:07:11.18639576 
+0000 UTC m=+581.640016392" watchObservedRunningTime="2026-03-08 22:07:11.219916421 +0000 UTC m=+581.673537063" Mar 08 22:07:11.220874 master-0 kubenswrapper[7480]: I0308 22:07:11.220809 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.220800104 podStartE2EDuration="2.220800104s" podCreationTimestamp="2026-03-08 22:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:07:11.213523468 +0000 UTC m=+581.667144140" watchObservedRunningTime="2026-03-08 22:07:11.220800104 +0000 UTC m=+581.674420746" Mar 08 22:07:11.596562 master-0 kubenswrapper[7480]: I0308 22:07:11.596497 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Mar 08 22:07:11.761495 master-0 kubenswrapper[7480]: I0308 22:07:11.761414 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5a90a446-01fc-4032-9d02-d82e25084ea9-var-lock\") pod \"5a90a446-01fc-4032-9d02-d82e25084ea9\" (UID: \"5a90a446-01fc-4032-9d02-d82e25084ea9\") " Mar 08 22:07:11.761828 master-0 kubenswrapper[7480]: I0308 22:07:11.761528 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a90a446-01fc-4032-9d02-d82e25084ea9-kube-api-access\") pod \"5a90a446-01fc-4032-9d02-d82e25084ea9\" (UID: \"5a90a446-01fc-4032-9d02-d82e25084ea9\") " Mar 08 22:07:11.761828 master-0 kubenswrapper[7480]: I0308 22:07:11.761583 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a90a446-01fc-4032-9d02-d82e25084ea9-kubelet-dir\") pod \"5a90a446-01fc-4032-9d02-d82e25084ea9\" (UID: \"5a90a446-01fc-4032-9d02-d82e25084ea9\") " Mar 08 22:07:11.761982 master-0 kubenswrapper[7480]: I0308 22:07:11.761923 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a90a446-01fc-4032-9d02-d82e25084ea9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5a90a446-01fc-4032-9d02-d82e25084ea9" (UID: "5a90a446-01fc-4032-9d02-d82e25084ea9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:07:11.761982 master-0 kubenswrapper[7480]: I0308 22:07:11.761966 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a90a446-01fc-4032-9d02-d82e25084ea9-var-lock" (OuterVolumeSpecName: "var-lock") pod "5a90a446-01fc-4032-9d02-d82e25084ea9" (UID: "5a90a446-01fc-4032-9d02-d82e25084ea9"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:07:11.766701 master-0 kubenswrapper[7480]: I0308 22:07:11.766646 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a90a446-01fc-4032-9d02-d82e25084ea9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5a90a446-01fc-4032-9d02-d82e25084ea9" (UID: "5a90a446-01fc-4032-9d02-d82e25084ea9"). InnerVolumeSpecName "kube-api-access". 
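The startup-latency lines are plain arithmetic: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and because firstStartedPulling and lastFinishedPulling are both the zero-time sentinel (0001-01-01, i.e. no image pull was needed), the SLO duration equals the E2E duration: 22:07:11.219916421 - 22:07:09 = 2.219916421s for installer-2-master-0. Reproducing the number from the two timestamps (this mirrors the arithmetic only, not the tracker's implementation):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps taken from the "Observed pod startup duration" entry above.
	created, _ := time.Parse(time.RFC3339, "2026-03-08T22:07:09Z")
	running, _ := time.Parse(time.RFC3339Nano, "2026-03-08T22:07:11.219916421Z")

	// No pull time to subtract: both pull timestamps are the zero sentinel.
	fmt.Println("podStartE2EDuration:", running.Sub(created)) // 2.219916421s
}
```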
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 22:07:11.862856 master-0 kubenswrapper[7480]: I0308 22:07:11.862795 7480 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a90a446-01fc-4032-9d02-d82e25084ea9-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 08 22:07:11.862856 master-0 kubenswrapper[7480]: I0308 22:07:11.862847 7480 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a90a446-01fc-4032-9d02-d82e25084ea9-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 22:07:11.862856 master-0 kubenswrapper[7480]: I0308 22:07:11.862856 7480 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5a90a446-01fc-4032-9d02-d82e25084ea9-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 08 22:07:12.173440 master-0 kubenswrapper[7480]: I0308 22:07:12.173314 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" event={"ID":"5a90a446-01fc-4032-9d02-d82e25084ea9","Type":"ContainerDied","Data":"9f59e2b32d6bb3b93d7fd47687e65c1a832f20441aaa4a265c3bd462b3ab818c"} Mar 08 22:07:12.173440 master-0 kubenswrapper[7480]: I0308 22:07:12.173394 7480 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f59e2b32d6bb3b93d7fd47687e65c1a832f20441aaa4a265c3bd462b3ab818c" Mar 08 22:07:12.174599 master-0 kubenswrapper[7480]: I0308 22:07:12.173544 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Mar 08 22:07:19.432993 master-0 kubenswrapper[7480]: I0308 22:07:19.432893 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:07:19.432993 master-0 kubenswrapper[7480]: I0308 22:07:19.432969 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:07:19.432993 master-0 kubenswrapper[7480]: I0308 22:07:19.432985 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:07:19.432993 master-0 kubenswrapper[7480]: I0308 22:07:19.433000 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:07:19.442583 master-0 kubenswrapper[7480]: I0308 22:07:19.441429 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:07:19.442583 master-0 kubenswrapper[7480]: I0308 22:07:19.441635 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:07:20.244825 master-0 kubenswrapper[7480]: I0308 22:07:20.244768 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:07:20.245400 master-0 kubenswrapper[7480]: I0308 22:07:20.245340 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:07:27.848610 master-0 kubenswrapper[7480]: I0308 22:07:27.848535 7480 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg"] Mar 08 22:07:27.849372 master-0 kubenswrapper[7480]: E0308 22:07:27.848813 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a90a446-01fc-4032-9d02-d82e25084ea9" containerName="installer" Mar 08 22:07:27.849372 master-0 kubenswrapper[7480]: I0308 22:07:27.848826 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a90a446-01fc-4032-9d02-d82e25084ea9" containerName="installer" Mar 08 22:07:27.849372 master-0 kubenswrapper[7480]: I0308 22:07:27.848969 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a90a446-01fc-4032-9d02-d82e25084ea9" containerName="installer" Mar 08 22:07:27.855388 master-0 kubenswrapper[7480]: I0308 22:07:27.855273 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:07:27.863340 master-0 kubenswrapper[7480]: I0308 22:07:27.863138 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 08 22:07:27.863340 master-0 kubenswrapper[7480]: I0308 22:07:27.863253 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 08 22:07:27.863340 master-0 kubenswrapper[7480]: I0308 22:07:27.863314 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 08 22:07:27.863674 master-0 kubenswrapper[7480]: I0308 22:07:27.863443 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 08 22:07:27.863674 master-0 kubenswrapper[7480]: I0308 22:07:27.863542 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 08 22:07:27.863674 master-0 kubenswrapper[7480]: I0308 22:07:27.863551 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-4jq4h" Mar 08 22:07:27.876376 master-0 kubenswrapper[7480]: I0308 22:07:27.874650 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 08 22:07:27.883911 master-0 kubenswrapper[7480]: I0308 22:07:27.883411 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg"] Mar 08 22:07:27.922539 master-0 kubenswrapper[7480]: I0308 22:07:27.922446 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-secret-telemeter-client\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:07:27.922539 master-0 kubenswrapper[7480]: I0308 22:07:27.922534 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-metrics-client-ca\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:07:27.922906 master-0 kubenswrapper[7480]: I0308 22:07:27.922587 7480 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-serving-certs-ca-bundle\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:07:27.922906 master-0 kubenswrapper[7480]: I0308 22:07:27.922611 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:07:27.922906 master-0 kubenswrapper[7480]: I0308 22:07:27.922671 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-telemeter-trusted-ca-bundle\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:07:27.922906 master-0 kubenswrapper[7480]: I0308 22:07:27.922694 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-federate-client-tls\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:07:27.922906 master-0 kubenswrapper[7480]: I0308 22:07:27.922746 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq2ch\" (UniqueName: \"kubernetes.io/projected/ecb3134a-ff4f-4069-8817-010b400296f6-kube-api-access-pq2ch\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:07:27.922906 master-0 kubenswrapper[7480]: I0308 22:07:27.922766 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-telemeter-client-tls\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:07:27.940896 master-0 kubenswrapper[7480]: I0308 22:07:27.940140 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-xlrwk"] Mar 08 22:07:27.940896 master-0 kubenswrapper[7480]: I0308 22:07:27.940919 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-xlrwk" Mar 08 22:07:27.945065 master-0 kubenswrapper[7480]: I0308 22:07:27.943518 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Mar 08 22:07:27.955092 master-0 kubenswrapper[7480]: I0308 22:07:27.951729 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-gnrft" Mar 08 22:07:28.024913 master-0 kubenswrapper[7480]: I0308 22:07:28.024834 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/7147d808-f9a2-434c-ae54-77d82a3d2e1f-ready\") pod \"cni-sysctl-allowlist-ds-xlrwk\" (UID: \"7147d808-f9a2-434c-ae54-77d82a3d2e1f\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xlrwk" Mar 08 22:07:28.025223 master-0 kubenswrapper[7480]: I0308 22:07:28.024930 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-metrics-client-ca\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:07:28.025223 master-0 kubenswrapper[7480]: I0308 22:07:28.024968 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7147d808-f9a2-434c-ae54-77d82a3d2e1f-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-xlrwk\" (UID: \"7147d808-f9a2-434c-ae54-77d82a3d2e1f\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xlrwk" Mar 08 22:07:28.025223 master-0 kubenswrapper[7480]: I0308 22:07:28.025002 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-serving-certs-ca-bundle\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:07:28.025378 master-0 kubenswrapper[7480]: I0308 22:07:28.025355 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:07:28.025454 master-0 kubenswrapper[7480]: I0308 22:07:28.025415 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-telemeter-trusted-ca-bundle\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:07:28.025505 master-0 kubenswrapper[7480]: I0308 22:07:28.025465 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-federate-client-tls\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:07:28.025556 master-0 
kubenswrapper[7480]: I0308 22:07:28.025515 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7147d808-f9a2-434c-ae54-77d82a3d2e1f-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-xlrwk\" (UID: \"7147d808-f9a2-434c-ae54-77d82a3d2e1f\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xlrwk" Mar 08 22:07:28.025556 master-0 kubenswrapper[7480]: I0308 22:07:28.025548 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgqvd\" (UniqueName: \"kubernetes.io/projected/7147d808-f9a2-434c-ae54-77d82a3d2e1f-kube-api-access-dgqvd\") pod \"cni-sysctl-allowlist-ds-xlrwk\" (UID: \"7147d808-f9a2-434c-ae54-77d82a3d2e1f\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xlrwk" Mar 08 22:07:28.025641 master-0 kubenswrapper[7480]: I0308 22:07:28.025580 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq2ch\" (UniqueName: \"kubernetes.io/projected/ecb3134a-ff4f-4069-8817-010b400296f6-kube-api-access-pq2ch\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:07:28.025685 master-0 kubenswrapper[7480]: I0308 22:07:28.025638 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-telemeter-client-tls\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:07:28.025685 master-0 kubenswrapper[7480]: I0308 22:07:28.025678 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-secret-telemeter-client\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:07:28.026695 master-0 kubenswrapper[7480]: I0308 22:07:28.026133 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-serving-certs-ca-bundle\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:07:28.026695 master-0 kubenswrapper[7480]: I0308 22:07:28.026353 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-telemeter-trusted-ca-bundle\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:07:28.027281 master-0 kubenswrapper[7480]: I0308 22:07:28.027239 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-metrics-client-ca\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:07:28.029511 master-0 kubenswrapper[7480]: I0308 22:07:28.029444 7480 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:07:28.029603 master-0 kubenswrapper[7480]: I0308 22:07:28.029522 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-federate-client-tls\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:07:28.029904 master-0 kubenswrapper[7480]: I0308 22:07:28.029861 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-telemeter-client-tls\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:07:28.031770 master-0 kubenswrapper[7480]: I0308 22:07:28.031728 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-secret-telemeter-client\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:07:28.044866 master-0 kubenswrapper[7480]: I0308 22:07:28.044807 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq2ch\" (UniqueName: \"kubernetes.io/projected/ecb3134a-ff4f-4069-8817-010b400296f6-kube-api-access-pq2ch\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:07:28.127515 master-0 kubenswrapper[7480]: I0308 22:07:28.127351 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7147d808-f9a2-434c-ae54-77d82a3d2e1f-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-xlrwk\" (UID: \"7147d808-f9a2-434c-ae54-77d82a3d2e1f\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xlrwk" Mar 08 22:07:28.127515 master-0 kubenswrapper[7480]: I0308 22:07:28.127417 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgqvd\" (UniqueName: \"kubernetes.io/projected/7147d808-f9a2-434c-ae54-77d82a3d2e1f-kube-api-access-dgqvd\") pod \"cni-sysctl-allowlist-ds-xlrwk\" (UID: \"7147d808-f9a2-434c-ae54-77d82a3d2e1f\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xlrwk" Mar 08 22:07:28.127844 master-0 kubenswrapper[7480]: I0308 22:07:28.127622 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/7147d808-f9a2-434c-ae54-77d82a3d2e1f-ready\") pod \"cni-sysctl-allowlist-ds-xlrwk\" (UID: \"7147d808-f9a2-434c-ae54-77d82a3d2e1f\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xlrwk" Mar 08 22:07:28.127844 master-0 kubenswrapper[7480]: I0308 22:07:28.127668 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/7147d808-f9a2-434c-ae54-77d82a3d2e1f-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-xlrwk\" (UID: \"7147d808-f9a2-434c-ae54-77d82a3d2e1f\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xlrwk" Mar 08 22:07:28.128050 master-0 kubenswrapper[7480]: I0308 22:07:28.128013 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7147d808-f9a2-434c-ae54-77d82a3d2e1f-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-xlrwk\" (UID: \"7147d808-f9a2-434c-ae54-77d82a3d2e1f\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xlrwk" Mar 08 22:07:28.128420 master-0 kubenswrapper[7480]: I0308 22:07:28.128361 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/7147d808-f9a2-434c-ae54-77d82a3d2e1f-ready\") pod \"cni-sysctl-allowlist-ds-xlrwk\" (UID: \"7147d808-f9a2-434c-ae54-77d82a3d2e1f\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xlrwk" Mar 08 22:07:28.128614 master-0 kubenswrapper[7480]: I0308 22:07:28.128557 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7147d808-f9a2-434c-ae54-77d82a3d2e1f-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-xlrwk\" (UID: \"7147d808-f9a2-434c-ae54-77d82a3d2e1f\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xlrwk" Mar 08 22:07:28.146997 master-0 kubenswrapper[7480]: I0308 22:07:28.146927 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgqvd\" (UniqueName: \"kubernetes.io/projected/7147d808-f9a2-434c-ae54-77d82a3d2e1f-kube-api-access-dgqvd\") pod \"cni-sysctl-allowlist-ds-xlrwk\" (UID: \"7147d808-f9a2-434c-ae54-77d82a3d2e1f\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xlrwk" Mar 08 22:07:28.189399 master-0 kubenswrapper[7480]: I0308 22:07:28.189325 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:07:28.267180 master-0 kubenswrapper[7480]: I0308 22:07:28.266672 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-xlrwk" Mar 08 22:07:28.288954 master-0 kubenswrapper[7480]: W0308 22:07:28.288887 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7147d808_f9a2_434c_ae54_77d82a3d2e1f.slice/crio-ea219c680a19acac705e94254b3b285a55f954107866c341dfd96d29ce5bfa38 WatchSource:0}: Error finding container ea219c680a19acac705e94254b3b285a55f954107866c341dfd96d29ce5bfa38: Status 404 returned error can't find the container with id ea219c680a19acac705e94254b3b285a55f954107866c341dfd96d29ce5bfa38 Mar 08 22:07:28.317829 master-0 kubenswrapper[7480]: I0308 22:07:28.317715 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-xlrwk" event={"ID":"7147d808-f9a2-434c-ae54-77d82a3d2e1f","Type":"ContainerStarted","Data":"ea219c680a19acac705e94254b3b285a55f954107866c341dfd96d29ce5bfa38"} Mar 08 22:07:28.471749 master-0 kubenswrapper[7480]: I0308 22:07:28.470357 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-7769569c45-9lhn8"] Mar 08 22:07:28.471749 master-0 kubenswrapper[7480]: I0308 22:07:28.471742 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-7769569c45-9lhn8" Mar 08 22:07:28.476754 master-0 kubenswrapper[7480]: I0308 22:07:28.476706 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-xjqqb" Mar 08 22:07:28.489657 master-0 kubenswrapper[7480]: I0308 22:07:28.489599 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7769569c45-9lhn8"] Mar 08 22:07:28.640578 master-0 kubenswrapper[7480]: I0308 22:07:28.640532 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/00db426a-15d4-4737-a85e-b4cf6362c759-webhook-certs\") pod \"multus-admission-controller-7769569c45-9lhn8\" (UID: \"00db426a-15d4-4737-a85e-b4cf6362c759\") " pod="openshift-multus/multus-admission-controller-7769569c45-9lhn8" Mar 08 22:07:28.640982 master-0 kubenswrapper[7480]: I0308 22:07:28.640957 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86mrp\" (UniqueName: \"kubernetes.io/projected/00db426a-15d4-4737-a85e-b4cf6362c759-kube-api-access-86mrp\") pod \"multus-admission-controller-7769569c45-9lhn8\" (UID: \"00db426a-15d4-4737-a85e-b4cf6362c759\") " pod="openshift-multus/multus-admission-controller-7769569c45-9lhn8" Mar 08 22:07:28.647588 master-0 kubenswrapper[7480]: I0308 22:07:28.647524 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg"] Mar 08 22:07:28.650213 master-0 kubenswrapper[7480]: W0308 22:07:28.650119 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecb3134a_ff4f_4069_8817_010b400296f6.slice/crio-e457f58882ed9a2cc2bdb7c9bf8dd928c9031f07753ed065fd3a502525f26699 WatchSource:0}: Error finding container e457f58882ed9a2cc2bdb7c9bf8dd928c9031f07753ed065fd3a502525f26699: Status 404 returned error can't find the container with id e457f58882ed9a2cc2bdb7c9bf8dd928c9031f07753ed065fd3a502525f26699 Mar 08 22:07:28.743088 master-0 kubenswrapper[7480]: I0308 22:07:28.742846 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/00db426a-15d4-4737-a85e-b4cf6362c759-webhook-certs\") pod \"multus-admission-controller-7769569c45-9lhn8\" (UID: \"00db426a-15d4-4737-a85e-b4cf6362c759\") " pod="openshift-multus/multus-admission-controller-7769569c45-9lhn8" Mar 08 22:07:28.743088 master-0 kubenswrapper[7480]: I0308 22:07:28.742986 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86mrp\" (UniqueName: \"kubernetes.io/projected/00db426a-15d4-4737-a85e-b4cf6362c759-kube-api-access-86mrp\") pod \"multus-admission-controller-7769569c45-9lhn8\" (UID: \"00db426a-15d4-4737-a85e-b4cf6362c759\") " pod="openshift-multus/multus-admission-controller-7769569c45-9lhn8" Mar 08 22:07:28.748318 master-0 kubenswrapper[7480]: I0308 22:07:28.747852 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/00db426a-15d4-4737-a85e-b4cf6362c759-webhook-certs\") pod \"multus-admission-controller-7769569c45-9lhn8\" (UID: \"00db426a-15d4-4737-a85e-b4cf6362c759\") " pod="openshift-multus/multus-admission-controller-7769569c45-9lhn8" Mar 08 22:07:28.761959 master-0 kubenswrapper[7480]: I0308 22:07:28.761887 7480 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86mrp\" (UniqueName: \"kubernetes.io/projected/00db426a-15d4-4737-a85e-b4cf6362c759-kube-api-access-86mrp\") pod \"multus-admission-controller-7769569c45-9lhn8\" (UID: \"00db426a-15d4-4737-a85e-b4cf6362c759\") " pod="openshift-multus/multus-admission-controller-7769569c45-9lhn8" Mar 08 22:07:28.810983 master-0 kubenswrapper[7480]: I0308 22:07:28.810918 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-7769569c45-9lhn8" Mar 08 22:07:29.284746 master-0 kubenswrapper[7480]: I0308 22:07:29.284681 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7769569c45-9lhn8"] Mar 08 22:07:29.288434 master-0 kubenswrapper[7480]: W0308 22:07:29.288371 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00db426a_15d4_4737_a85e_b4cf6362c759.slice/crio-67cd73a40904f0f9ea787ff881d2a840cf10744bf89845b00e5d994f7ee5b67d WatchSource:0}: Error finding container 67cd73a40904f0f9ea787ff881d2a840cf10744bf89845b00e5d994f7ee5b67d: Status 404 returned error can't find the container with id 67cd73a40904f0f9ea787ff881d2a840cf10744bf89845b00e5d994f7ee5b67d Mar 08 22:07:29.328369 master-0 kubenswrapper[7480]: I0308 22:07:29.328295 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-xlrwk" event={"ID":"7147d808-f9a2-434c-ae54-77d82a3d2e1f","Type":"ContainerStarted","Data":"085c2b709340e9e7b1c10997c84791d15d8a29ba0dddbf267ba43144aeb516e1"} Mar 08 22:07:29.329346 master-0 kubenswrapper[7480]: I0308 22:07:29.329277 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-xlrwk" Mar 08 22:07:29.333275 master-0 kubenswrapper[7480]: I0308 22:07:29.332645 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" event={"ID":"ecb3134a-ff4f-4069-8817-010b400296f6","Type":"ContainerStarted","Data":"e457f58882ed9a2cc2bdb7c9bf8dd928c9031f07753ed065fd3a502525f26699"} Mar 08 22:07:29.339297 master-0 kubenswrapper[7480]: I0308 22:07:29.339245 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-9lhn8" event={"ID":"00db426a-15d4-4737-a85e-b4cf6362c759","Type":"ContainerStarted","Data":"67cd73a40904f0f9ea787ff881d2a840cf10744bf89845b00e5d994f7ee5b67d"} Mar 08 22:07:29.351951 master-0 kubenswrapper[7480]: I0308 22:07:29.351795 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-xlrwk" podStartSLOduration=2.351764915 podStartE2EDuration="2.351764915s" podCreationTimestamp="2026-03-08 22:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:07:29.346902142 +0000 UTC m=+599.800522764" watchObservedRunningTime="2026-03-08 22:07:29.351764915 +0000 UTC m=+599.805385527" Mar 08 22:07:29.372627 master-0 kubenswrapper[7480]: I0308 22:07:29.372542 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-xlrwk" Mar 08 22:07:30.352737 master-0 kubenswrapper[7480]: I0308 22:07:30.352656 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-9lhn8" 
event={"ID":"00db426a-15d4-4737-a85e-b4cf6362c759","Type":"ContainerStarted","Data":"b3b5ab2b0d8d50e18ad35cade1f6c161c02a82cb4cde7ef485b681883ca98cec"} Mar 08 22:07:30.354934 master-0 kubenswrapper[7480]: I0308 22:07:30.354476 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-9lhn8" event={"ID":"00db426a-15d4-4737-a85e-b4cf6362c759","Type":"ContainerStarted","Data":"20d694fb7dfac0a25e84f67b4332f4f50bd881d205956ffffe007db0387183da"} Mar 08 22:07:30.381564 master-0 kubenswrapper[7480]: I0308 22:07:30.380115 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-7769569c45-9lhn8" podStartSLOduration=2.3800563 podStartE2EDuration="2.3800563s" podCreationTimestamp="2026-03-08 22:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:07:30.373573654 +0000 UTC m=+600.827194266" watchObservedRunningTime="2026-03-08 22:07:30.3800563 +0000 UTC m=+600.833676902" Mar 08 22:07:30.450698 master-0 kubenswrapper[7480]: I0308 22:07:30.450610 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-ddw98"] Mar 08 22:07:30.451297 master-0 kubenswrapper[7480]: I0308 22:07:30.451219 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-8d675b596-ddw98" podUID="1dfc8afd-2330-46a4-ae5b-36522102b332" containerName="kube-rbac-proxy" containerID="cri-o://b9a377863624adb6bc6cea75cc961084a7220374ccf2adc5f27393ba6245e41b" gracePeriod=30 Mar 08 22:07:30.451528 master-0 kubenswrapper[7480]: I0308 22:07:30.451473 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-8d675b596-ddw98" podUID="1dfc8afd-2330-46a4-ae5b-36522102b332" containerName="multus-admission-controller" containerID="cri-o://fa30505314844ca92e33f96b4695dfb9bc34ac5a945fbb42bad40ad5f234fa56" gracePeriod=30 Mar 08 22:07:31.175650 master-0 kubenswrapper[7480]: I0308 22:07:31.169741 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-xlrwk"] Mar 08 22:07:31.740516 master-0 kubenswrapper[7480]: I0308 22:07:31.740466 7480 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 08 22:07:32.383122 master-0 kubenswrapper[7480]: I0308 22:07:32.382983 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" event={"ID":"ecb3134a-ff4f-4069-8817-010b400296f6","Type":"ContainerStarted","Data":"39b49a99ba062a390ef6b5e55d7a6330fbf856db4c4f7d6e5517d23a5e71b49d"} Mar 08 22:07:32.388556 master-0 kubenswrapper[7480]: I0308 22:07:32.388411 7480 generic.go:334] "Generic (PLEG): container finished" podID="1dfc8afd-2330-46a4-ae5b-36522102b332" containerID="b9a377863624adb6bc6cea75cc961084a7220374ccf2adc5f27393ba6245e41b" exitCode=0 Mar 08 22:07:32.388676 master-0 kubenswrapper[7480]: I0308 22:07:32.388463 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-ddw98" event={"ID":"1dfc8afd-2330-46a4-ae5b-36522102b332","Type":"ContainerDied","Data":"b9a377863624adb6bc6cea75cc961084a7220374ccf2adc5f27393ba6245e41b"} Mar 08 22:07:32.389202 master-0 kubenswrapper[7480]: I0308 22:07:32.389134 7480 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-xlrwk" podUID="7147d808-f9a2-434c-ae54-77d82a3d2e1f" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://085c2b709340e9e7b1c10997c84791d15d8a29ba0dddbf267ba43144aeb516e1" gracePeriod=30 Mar 08 22:07:34.405582 master-0 kubenswrapper[7480]: I0308 22:07:34.405449 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" event={"ID":"ecb3134a-ff4f-4069-8817-010b400296f6","Type":"ContainerStarted","Data":"b62c2f59b7d3966761efe831860376676122986f3507dcafd946e48612f86ef4"} Mar 08 22:07:34.405582 master-0 kubenswrapper[7480]: I0308 22:07:34.405562 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" event={"ID":"ecb3134a-ff4f-4069-8817-010b400296f6","Type":"ContainerStarted","Data":"138d5d8619c73c03811c136abc660b710e532f1202c13d7d1602e706a526f68e"} Mar 08 22:07:34.453243 master-0 kubenswrapper[7480]: I0308 22:07:34.453099 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" podStartSLOduration=2.663067932 podStartE2EDuration="7.453030285s" podCreationTimestamp="2026-03-08 22:07:27 +0000 UTC" firstStartedPulling="2026-03-08 22:07:28.651760681 +0000 UTC m=+599.105381283" lastFinishedPulling="2026-03-08 22:07:33.441723024 +0000 UTC m=+603.895343636" observedRunningTime="2026-03-08 22:07:34.443354628 +0000 UTC m=+604.896975290" watchObservedRunningTime="2026-03-08 22:07:34.453030285 +0000 UTC m=+604.906650947" Mar 08 22:07:37.432204 master-0 kubenswrapper[7480]: I0308 22:07:37.432130 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 08 22:07:37.433188 master-0 kubenswrapper[7480]: I0308 22:07:37.433105 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 08 22:07:37.438505 master-0 kubenswrapper[7480]: I0308 22:07:37.438439 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-jszd4" Mar 08 22:07:37.441891 master-0 kubenswrapper[7480]: I0308 22:07:37.441793 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 08 22:07:37.453584 master-0 kubenswrapper[7480]: I0308 22:07:37.453518 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 08 22:07:37.610988 master-0 kubenswrapper[7480]: I0308 22:07:37.610904 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ee0b93ec-6ea0-4704-9449-57781a482ce4-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"ee0b93ec-6ea0-4704-9449-57781a482ce4\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 08 22:07:37.610988 master-0 kubenswrapper[7480]: I0308 22:07:37.610971 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee0b93ec-6ea0-4704-9449-57781a482ce4-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"ee0b93ec-6ea0-4704-9449-57781a482ce4\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 08 22:07:37.611425 master-0 kubenswrapper[7480]: I0308 22:07:37.611024 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee0b93ec-6ea0-4704-9449-57781a482ce4-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"ee0b93ec-6ea0-4704-9449-57781a482ce4\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 08 22:07:37.712541 master-0 kubenswrapper[7480]: I0308 22:07:37.712376 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ee0b93ec-6ea0-4704-9449-57781a482ce4-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"ee0b93ec-6ea0-4704-9449-57781a482ce4\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 08 22:07:37.712541 master-0 kubenswrapper[7480]: I0308 22:07:37.712457 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee0b93ec-6ea0-4704-9449-57781a482ce4-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"ee0b93ec-6ea0-4704-9449-57781a482ce4\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 08 22:07:37.712541 master-0 kubenswrapper[7480]: I0308 22:07:37.712513 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee0b93ec-6ea0-4704-9449-57781a482ce4-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"ee0b93ec-6ea0-4704-9449-57781a482ce4\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 08 22:07:37.712933 master-0 kubenswrapper[7480]: I0308 22:07:37.712841 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ee0b93ec-6ea0-4704-9449-57781a482ce4-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"ee0b93ec-6ea0-4704-9449-57781a482ce4\") " 
pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 08 22:07:37.713010 master-0 kubenswrapper[7480]: I0308 22:07:37.712936 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee0b93ec-6ea0-4704-9449-57781a482ce4-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"ee0b93ec-6ea0-4704-9449-57781a482ce4\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 08 22:07:37.736230 master-0 kubenswrapper[7480]: I0308 22:07:37.736143 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee0b93ec-6ea0-4704-9449-57781a482ce4-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"ee0b93ec-6ea0-4704-9449-57781a482ce4\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 08 22:07:37.757841 master-0 kubenswrapper[7480]: I0308 22:07:37.757753 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 08 22:07:38.182829 master-0 kubenswrapper[7480]: I0308 22:07:38.182759 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 08 22:07:38.195208 master-0 kubenswrapper[7480]: W0308 22:07:38.195142 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podee0b93ec_6ea0_4704_9449_57781a482ce4.slice/crio-2845903e096222c96f510db45f4f9c79a71bfc1e7049da80e97dc3bb6436df6c WatchSource:0}: Error finding container 2845903e096222c96f510db45f4f9c79a71bfc1e7049da80e97dc3bb6436df6c: Status 404 returned error can't find the container with id 2845903e096222c96f510db45f4f9c79a71bfc1e7049da80e97dc3bb6436df6c Mar 08 22:07:38.272177 master-0 kubenswrapper[7480]: E0308 22:07:38.270443 7480 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="085c2b709340e9e7b1c10997c84791d15d8a29ba0dddbf267ba43144aeb516e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 22:07:38.272653 master-0 kubenswrapper[7480]: E0308 22:07:38.272598 7480 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="085c2b709340e9e7b1c10997c84791d15d8a29ba0dddbf267ba43144aeb516e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 22:07:38.276180 master-0 kubenswrapper[7480]: E0308 22:07:38.274721 7480 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="085c2b709340e9e7b1c10997c84791d15d8a29ba0dddbf267ba43144aeb516e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 22:07:38.276180 master-0 kubenswrapper[7480]: E0308 22:07:38.274764 7480 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-xlrwk" podUID="7147d808-f9a2-434c-ae54-77d82a3d2e1f" containerName="kube-multus-additional-cni-plugins" Mar 08 22:07:38.440336 master-0 kubenswrapper[7480]: I0308 22:07:38.440145 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"ee0b93ec-6ea0-4704-9449-57781a482ce4","Type":"ContainerStarted","Data":"2845903e096222c96f510db45f4f9c79a71bfc1e7049da80e97dc3bb6436df6c"} Mar 08 22:07:39.448867 master-0 kubenswrapper[7480]: I0308 22:07:39.448768 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"ee0b93ec-6ea0-4704-9449-57781a482ce4","Type":"ContainerStarted","Data":"c38d9f8500098eb10c48b40a07d5d0aefa68c69ce87a29f847a74bc382b44913"} Mar 08 22:07:42.213865 master-0 kubenswrapper[7480]: E0308 22:07:42.213771 7480 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/etcd-pod.yaml\": /etc/kubernetes/manifests/etcd-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Mar 08 22:07:42.214429 master-0 kubenswrapper[7480]: I0308 22:07:42.214152 7480 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 08 22:07:42.215008 master-0 kubenswrapper[7480]: I0308 22:07:42.214942 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" containerID="cri-o://17f9d25fbb914d5a3dabb518816b0dd89ec825230d824bcb65e8fd78aa107263" gracePeriod=30 Mar 08 22:07:42.215134 master-0 kubenswrapper[7480]: I0308 22:07:42.215006 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" containerID="cri-o://fce1d0b33f79a5db2c834c3acb6708a2b4525d393f2127dc9ac29f5cb4e7d10f" gracePeriod=30 Mar 08 22:07:42.215461 master-0 kubenswrapper[7480]: I0308 22:07:42.215257 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" containerID="cri-o://a69f74d53c596ef3b12b5da1bd7cc9ace4063b0146a15cbb35be1605265e652f" gracePeriod=30 Mar 08 22:07:42.215461 master-0 kubenswrapper[7480]: I0308 22:07:42.215266 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" containerID="cri-o://0434eeabd4f2ba088632cac4cb23a98f511011740d9d088962f7101dda68fb2e" gracePeriod=30 Mar 08 22:07:42.215582 master-0 kubenswrapper[7480]: I0308 22:07:42.215560 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" containerID="cri-o://9dab50817c32126dd044c242d5caff6d18e7c835ca9c9b1834ec3d5b22d1386e" gracePeriod=30 Mar 08 22:07:42.219943 master-0 kubenswrapper[7480]: I0308 22:07:42.219841 7480 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 08 22:07:42.220415 master-0 kubenswrapper[7480]: E0308 22:07:42.220351 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="setup" Mar 08 22:07:42.220415 master-0 kubenswrapper[7480]: I0308 22:07:42.220392 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="setup" Mar 08 22:07:42.220518 master-0 kubenswrapper[7480]: E0308 22:07:42.220418 7480 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-ensure-env-vars" Mar 08 22:07:42.220518 master-0 kubenswrapper[7480]: I0308 22:07:42.220438 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-ensure-env-vars" Mar 08 22:07:42.220518 master-0 kubenswrapper[7480]: E0308 22:07:42.220460 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-resources-copy" Mar 08 22:07:42.220518 master-0 kubenswrapper[7480]: I0308 22:07:42.220476 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-resources-copy" Mar 08 22:07:42.220518 master-0 kubenswrapper[7480]: E0308 22:07:42.220501 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" Mar 08 22:07:42.220710 master-0 kubenswrapper[7480]: I0308 22:07:42.220520 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" Mar 08 22:07:42.220710 master-0 kubenswrapper[7480]: E0308 22:07:42.220549 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" Mar 08 22:07:42.220710 master-0 kubenswrapper[7480]: I0308 22:07:42.220568 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" Mar 08 22:07:42.220710 master-0 kubenswrapper[7480]: E0308 22:07:42.220598 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" Mar 08 22:07:42.220710 master-0 kubenswrapper[7480]: I0308 22:07:42.220614 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" Mar 08 22:07:42.220710 master-0 kubenswrapper[7480]: E0308 22:07:42.220651 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" Mar 08 22:07:42.220710 master-0 kubenswrapper[7480]: I0308 22:07:42.220668 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" Mar 08 22:07:42.220710 master-0 kubenswrapper[7480]: E0308 22:07:42.220703 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" Mar 08 22:07:42.220710 master-0 kubenswrapper[7480]: I0308 22:07:42.220719 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" Mar 08 22:07:42.221148 master-0 kubenswrapper[7480]: I0308 22:07:42.220982 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" Mar 08 22:07:42.221148 master-0 kubenswrapper[7480]: I0308 22:07:42.221017 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" Mar 08 22:07:42.221148 master-0 kubenswrapper[7480]: I0308 22:07:42.221045 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" Mar 08 22:07:42.236106 master-0 kubenswrapper[7480]: I0308 22:07:42.235971 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" Mar 08 22:07:42.236212 
master-0 kubenswrapper[7480]: I0308 22:07:42.236163 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" Mar 08 22:07:42.398009 master-0 kubenswrapper[7480]: I0308 22:07:42.397646 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 22:07:42.398009 master-0 kubenswrapper[7480]: I0308 22:07:42.397694 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 22:07:42.398009 master-0 kubenswrapper[7480]: I0308 22:07:42.397746 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 22:07:42.398009 master-0 kubenswrapper[7480]: I0308 22:07:42.397790 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 22:07:42.398009 master-0 kubenswrapper[7480]: I0308 22:07:42.397813 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 22:07:42.398009 master-0 kubenswrapper[7480]: I0308 22:07:42.397901 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 22:07:42.477916 master-0 kubenswrapper[7480]: I0308 22:07:42.477696 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log" Mar 08 22:07:42.481127 master-0 kubenswrapper[7480]: I0308 22:07:42.479784 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log" Mar 08 22:07:42.482481 master-0 kubenswrapper[7480]: I0308 22:07:42.482447 7480 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="fce1d0b33f79a5db2c834c3acb6708a2b4525d393f2127dc9ac29f5cb4e7d10f" exitCode=2 Mar 08 22:07:42.482481 master-0 kubenswrapper[7480]: I0308 22:07:42.482473 7480 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="0434eeabd4f2ba088632cac4cb23a98f511011740d9d088962f7101dda68fb2e" exitCode=0 Mar 08 22:07:42.482481 master-0 kubenswrapper[7480]: I0308 22:07:42.482483 7480 generic.go:334] "Generic (PLEG): container finished" 
podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="9dab50817c32126dd044c242d5caff6d18e7c835ca9c9b1834ec3d5b22d1386e" exitCode=2 Mar 08 22:07:42.500166 master-0 kubenswrapper[7480]: I0308 22:07:42.499755 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 22:07:42.500166 master-0 kubenswrapper[7480]: I0308 22:07:42.499842 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 22:07:42.500166 master-0 kubenswrapper[7480]: I0308 22:07:42.499933 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 22:07:42.500166 master-0 kubenswrapper[7480]: I0308 22:07:42.499946 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 22:07:42.500166 master-0 kubenswrapper[7480]: I0308 22:07:42.500054 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 22:07:42.500166 master-0 kubenswrapper[7480]: I0308 22:07:42.500173 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 22:07:42.501220 master-0 kubenswrapper[7480]: I0308 22:07:42.500274 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 22:07:42.501220 master-0 kubenswrapper[7480]: I0308 22:07:42.500338 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 22:07:42.501220 master-0 kubenswrapper[7480]: I0308 22:07:42.500408 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 22:07:42.501220 master-0 kubenswrapper[7480]: I0308 22:07:42.500471 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 22:07:42.501220 master-0 kubenswrapper[7480]: I0308 22:07:42.500494 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 22:07:42.501220 master-0 kubenswrapper[7480]: I0308 22:07:42.500543 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 22:07:48.269375 master-0 kubenswrapper[7480]: E0308 22:07:48.269280 7480 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="085c2b709340e9e7b1c10997c84791d15d8a29ba0dddbf267ba43144aeb516e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 22:07:48.271089 master-0 kubenswrapper[7480]: E0308 22:07:48.271011 7480 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="085c2b709340e9e7b1c10997c84791d15d8a29ba0dddbf267ba43144aeb516e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 22:07:48.272346 master-0 kubenswrapper[7480]: E0308 22:07:48.272316 7480 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="085c2b709340e9e7b1c10997c84791d15d8a29ba0dddbf267ba43144aeb516e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 22:07:48.272413 master-0 kubenswrapper[7480]: E0308 22:07:48.272357 7480 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-xlrwk" podUID="7147d808-f9a2-434c-ae54-77d82a3d2e1f" containerName="kube-multus-additional-cni-plugins" Mar 08 22:07:52.584669 master-0 kubenswrapper[7480]: I0308 22:07:52.584573 7480 generic.go:334] "Generic (PLEG): container finished" podID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerID="8e67a6a8195a1bf0907601fa19ffa597a648c56ee5160c3ec3e81c5ecf98df23" exitCode=0 Mar 08 22:07:52.584669 master-0 kubenswrapper[7480]: I0308 22:07:52.584657 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" event={"ID":"81f5ed55-225c-41e2-bc9d-b41063a604c9","Type":"ContainerDied","Data":"8e67a6a8195a1bf0907601fa19ffa597a648c56ee5160c3ec3e81c5ecf98df23"} Mar 08 22:07:52.585653 master-0 kubenswrapper[7480]: I0308 22:07:52.584720 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" event={"ID":"81f5ed55-225c-41e2-bc9d-b41063a604c9","Type":"ContainerStarted","Data":"a9ff593041cd55425d50bbaa4be87eabe25dc7300e7e43dd725623d6f81a484c"} Mar 08 22:07:52.585653 master-0 kubenswrapper[7480]: I0308 22:07:52.584845 7480 scope.go:117] "RemoveContainer" 
containerID="043bea0bfcad80d082009c992d1913377d82e97e1ea5f2b55356dd0fdc8a2c8f" Mar 08 22:07:53.500621 master-0 kubenswrapper[7480]: I0308 22:07:53.500507 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:07:53.506667 master-0 kubenswrapper[7480]: I0308 22:07:53.506600 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:07:53.506667 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:07:53.506667 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:07:53.506667 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:07:53.507194 master-0 kubenswrapper[7480]: I0308 22:07:53.506694 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:07:54.505353 master-0 kubenswrapper[7480]: I0308 22:07:54.505204 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:07:54.505353 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:07:54.505353 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:07:54.505353 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:07:54.506759 master-0 kubenswrapper[7480]: I0308 22:07:54.505359 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:07:54.646473 master-0 kubenswrapper[7480]: I0308 22:07:54.646319 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5bd68ed75dc57765fa56dbf42c892ba9/kube-controller-manager/0.log" Mar 08 22:07:54.646473 master-0 kubenswrapper[7480]: I0308 22:07:54.646382 7480 generic.go:334] "Generic (PLEG): container finished" podID="5bd68ed75dc57765fa56dbf42c892ba9" containerID="f6a438a833eacd73d743c77bfb467be55e7560ad44e884b3ffe14f62aaa3a243" exitCode=1 Mar 08 22:07:54.646886 master-0 kubenswrapper[7480]: I0308 22:07:54.646844 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5bd68ed75dc57765fa56dbf42c892ba9","Type":"ContainerDied","Data":"f6a438a833eacd73d743c77bfb467be55e7560ad44e884b3ffe14f62aaa3a243"} Mar 08 22:07:54.647506 master-0 kubenswrapper[7480]: I0308 22:07:54.647473 7480 scope.go:117] "RemoveContainer" containerID="f6a438a833eacd73d743c77bfb467be55e7560ad44e884b3ffe14f62aaa3a243" Mar 08 22:07:55.501203 master-0 kubenswrapper[7480]: I0308 22:07:55.501140 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:07:55.504793 master-0 kubenswrapper[7480]: I0308 22:07:55.504723 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:07:55.504793 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:07:55.504793 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:07:55.504793 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:07:55.505083 master-0 kubenswrapper[7480]: I0308 22:07:55.504843 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:07:55.665211 master-0 kubenswrapper[7480]: I0308 22:07:55.665103 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5bd68ed75dc57765fa56dbf42c892ba9/kube-controller-manager/0.log" Mar 08 22:07:55.666406 master-0 kubenswrapper[7480]: I0308 22:07:55.665476 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5bd68ed75dc57765fa56dbf42c892ba9","Type":"ContainerStarted","Data":"7c9ea7ffc9e1743410533f8ed0b6e8efe3d3b131277609a5cc95cd8a8a1c70f9"} Mar 08 22:07:56.504352 master-0 kubenswrapper[7480]: I0308 22:07:56.504260 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:07:56.504352 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:07:56.504352 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:07:56.504352 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:07:56.504643 master-0 kubenswrapper[7480]: I0308 22:07:56.504404 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:07:56.677735 master-0 kubenswrapper[7480]: I0308 22:07:56.677555 7480 generic.go:334] "Generic (PLEG): container finished" podID="8f9a1ffa-fdef-4201-81a9-35b944f8c193" containerID="8b1f61f93e111d7a59ff7b3af6ad621f3547dafb0a9264256b214c4d46121676" exitCode=0 Mar 08 22:07:56.677735 master-0 kubenswrapper[7480]: I0308 22:07:56.677628 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"8f9a1ffa-fdef-4201-81a9-35b944f8c193","Type":"ContainerDied","Data":"8b1f61f93e111d7a59ff7b3af6ad621f3547dafb0a9264256b214c4d46121676"} Mar 08 22:07:57.504104 master-0 kubenswrapper[7480]: I0308 22:07:57.503966 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:07:57.504104 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:07:57.504104 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:07:57.504104 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:07:57.504765 master-0 kubenswrapper[7480]: I0308 22:07:57.504193 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" 
podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:07:58.054208 master-0 kubenswrapper[7480]: I0308 22:07:58.054086 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 08 22:07:58.079906 master-0 kubenswrapper[7480]: I0308 22:07:58.079861 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f9a1ffa-fdef-4201-81a9-35b944f8c193-kubelet-dir\") pod \"8f9a1ffa-fdef-4201-81a9-35b944f8c193\" (UID: \"8f9a1ffa-fdef-4201-81a9-35b944f8c193\") " Mar 08 22:07:58.080030 master-0 kubenswrapper[7480]: I0308 22:07:58.079926 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8f9a1ffa-fdef-4201-81a9-35b944f8c193-var-lock\") pod \"8f9a1ffa-fdef-4201-81a9-35b944f8c193\" (UID: \"8f9a1ffa-fdef-4201-81a9-35b944f8c193\") " Mar 08 22:07:58.080030 master-0 kubenswrapper[7480]: I0308 22:07:58.079989 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f9a1ffa-fdef-4201-81a9-35b944f8c193-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8f9a1ffa-fdef-4201-81a9-35b944f8c193" (UID: "8f9a1ffa-fdef-4201-81a9-35b944f8c193"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:07:58.080119 master-0 kubenswrapper[7480]: I0308 22:07:58.080054 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f9a1ffa-fdef-4201-81a9-35b944f8c193-kube-api-access\") pod \"8f9a1ffa-fdef-4201-81a9-35b944f8c193\" (UID: \"8f9a1ffa-fdef-4201-81a9-35b944f8c193\") " Mar 08 22:07:58.080178 master-0 kubenswrapper[7480]: I0308 22:07:58.080112 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f9a1ffa-fdef-4201-81a9-35b944f8c193-var-lock" (OuterVolumeSpecName: "var-lock") pod "8f9a1ffa-fdef-4201-81a9-35b944f8c193" (UID: "8f9a1ffa-fdef-4201-81a9-35b944f8c193"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:07:58.080435 master-0 kubenswrapper[7480]: I0308 22:07:58.080408 7480 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8f9a1ffa-fdef-4201-81a9-35b944f8c193-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 08 22:07:58.080435 master-0 kubenswrapper[7480]: I0308 22:07:58.080433 7480 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f9a1ffa-fdef-4201-81a9-35b944f8c193-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 22:07:58.082613 master-0 kubenswrapper[7480]: I0308 22:07:58.082587 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f9a1ffa-fdef-4201-81a9-35b944f8c193-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8f9a1ffa-fdef-4201-81a9-35b944f8c193" (UID: "8f9a1ffa-fdef-4201-81a9-35b944f8c193"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 22:07:58.181302 master-0 kubenswrapper[7480]: I0308 22:07:58.181229 7480 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f9a1ffa-fdef-4201-81a9-35b944f8c193-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 08 22:07:58.270293 master-0 kubenswrapper[7480]: E0308 22:07:58.270177 7480 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="085c2b709340e9e7b1c10997c84791d15d8a29ba0dddbf267ba43144aeb516e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 22:07:58.272531 master-0 kubenswrapper[7480]: E0308 22:07:58.272446 7480 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="085c2b709340e9e7b1c10997c84791d15d8a29ba0dddbf267ba43144aeb516e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 22:07:58.274745 master-0 kubenswrapper[7480]: E0308 22:07:58.274547 7480 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="085c2b709340e9e7b1c10997c84791d15d8a29ba0dddbf267ba43144aeb516e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 22:07:58.274745 master-0 kubenswrapper[7480]: E0308 22:07:58.274580 7480 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-xlrwk" podUID="7147d808-f9a2-434c-ae54-77d82a3d2e1f" containerName="kube-multus-additional-cni-plugins" Mar 08 22:07:58.503998 master-0 kubenswrapper[7480]: I0308 22:07:58.503788 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:07:58.503998 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:07:58.503998 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:07:58.503998 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:07:58.504425 master-0 kubenswrapper[7480]: I0308 22:07:58.504151 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:07:58.705388 master-0 kubenswrapper[7480]: I0308 22:07:58.705298 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"8f9a1ffa-fdef-4201-81a9-35b944f8c193","Type":"ContainerDied","Data":"b10a7439b4f05569de6ee0e41f25c0e406a481406829e6ce9ab87733d5ae443c"} Mar 08 22:07:58.705388 master-0 kubenswrapper[7480]: I0308 22:07:58.705366 7480 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b10a7439b4f05569de6ee0e41f25c0e406a481406829e6ce9ab87733d5ae443c" Mar 08 22:07:58.705796 master-0 kubenswrapper[7480]: I0308 22:07:58.705450 7480 util.go:48] "No ready sandbox for pod can be found. 
Mar 08 22:07:58.712066 master-0 kubenswrapper[7480]: I0308 22:07:58.711995 7480 generic.go:334] "Generic (PLEG): container finished" podID="a1a56802af72ce1aac6b5077f1695ac0" containerID="b6b246bb81907eac732c126403c542413078697b3a057b896aee540f8c7e39d9" exitCode=1
Mar 08 22:07:58.712066 master-0 kubenswrapper[7480]: I0308 22:07:58.712048 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerDied","Data":"b6b246bb81907eac732c126403c542413078697b3a057b896aee540f8c7e39d9"}
Mar 08 22:07:58.712296 master-0 kubenswrapper[7480]: I0308 22:07:58.712115 7480 scope.go:117] "RemoveContainer" containerID="f50874fd44a38fe2052c0dd021aa5c5eab2b987367eeee5b46f35dae49f0f668"
Mar 08 22:07:58.713112 master-0 kubenswrapper[7480]: I0308 22:07:58.713012 7480 scope.go:117] "RemoveContainer" containerID="b6b246bb81907eac732c126403c542413078697b3a057b896aee540f8c7e39d9"
Mar 08 22:07:58.713454 master-0 kubenswrapper[7480]: E0308 22:07:58.713397 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=bootstrap-kube-scheduler-master-0_kube-system(a1a56802af72ce1aac6b5077f1695ac0)\"" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="a1a56802af72ce1aac6b5077f1695ac0"
Mar 08 22:07:58.853106 master-0 kubenswrapper[7480]: E0308 22:07:58.852904 7480 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 22:07:59.432340 master-0 kubenswrapper[7480]: I0308 22:07:59.432260 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 22:07:59.432340 master-0 kubenswrapper[7480]: I0308 22:07:59.432320 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 22:07:59.441764 master-0 kubenswrapper[7480]: I0308 22:07:59.441696 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 22:07:59.502827 master-0 kubenswrapper[7480]: I0308 22:07:59.502757 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:07:59.502827 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:07:59.502827 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:07:59.502827 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:07:59.503261 master-0 kubenswrapper[7480]: I0308 22:07:59.502841 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:08:00.502647 master-0 kubenswrapper[7480]: I0308 22:08:00.502520 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:08:00.502647 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:08:00.502647 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:08:00.502647 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:08:00.502647 master-0 kubenswrapper[7480]: I0308 22:08:00.502599 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:08:00.738510 master-0 kubenswrapper[7480]: I0308 22:08:00.738339 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-8d675b596-ddw98_1dfc8afd-2330-46a4-ae5b-36522102b332/multus-admission-controller/0.log"
Mar 08 22:08:00.738510 master-0 kubenswrapper[7480]: I0308 22:08:00.738424 7480 generic.go:334] "Generic (PLEG): container finished" podID="1dfc8afd-2330-46a4-ae5b-36522102b332" containerID="fa30505314844ca92e33f96b4695dfb9bc34ac5a945fbb42bad40ad5f234fa56" exitCode=137
Mar 08 22:08:00.738510 master-0 kubenswrapper[7480]: I0308 22:08:00.738469 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-ddw98" event={"ID":"1dfc8afd-2330-46a4-ae5b-36522102b332","Type":"ContainerDied","Data":"fa30505314844ca92e33f96b4695dfb9bc34ac5a945fbb42bad40ad5f234fa56"}
Mar 08 22:08:01.399594 master-0 kubenswrapper[7480]: I0308 22:08:01.399503 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-8d675b596-ddw98_1dfc8afd-2330-46a4-ae5b-36522102b332/multus-admission-controller/0.log"
Mar 08 22:08:01.399901 master-0 kubenswrapper[7480]: I0308 22:08:01.399640 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-ddw98"
Mar 08 22:08:01.445812 master-0 kubenswrapper[7480]: I0308 22:08:01.445725 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtbpk\" (UniqueName: \"kubernetes.io/projected/1dfc8afd-2330-46a4-ae5b-36522102b332-kube-api-access-jtbpk\") pod \"1dfc8afd-2330-46a4-ae5b-36522102b332\" (UID: \"1dfc8afd-2330-46a4-ae5b-36522102b332\") "
Mar 08 22:08:01.446421 master-0 kubenswrapper[7480]: I0308 22:08:01.445838 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs\") pod \"1dfc8afd-2330-46a4-ae5b-36522102b332\" (UID: \"1dfc8afd-2330-46a4-ae5b-36522102b332\") "
Mar 08 22:08:01.451434 master-0 kubenswrapper[7480]: I0308 22:08:01.451347 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dfc8afd-2330-46a4-ae5b-36522102b332-kube-api-access-jtbpk" (OuterVolumeSpecName: "kube-api-access-jtbpk") pod "1dfc8afd-2330-46a4-ae5b-36522102b332" (UID: "1dfc8afd-2330-46a4-ae5b-36522102b332"). InnerVolumeSpecName "kube-api-access-jtbpk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 22:08:01.451434 master-0 kubenswrapper[7480]: I0308 22:08:01.451387 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "1dfc8afd-2330-46a4-ae5b-36522102b332" (UID: "1dfc8afd-2330-46a4-ae5b-36522102b332"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 22:08:01.503999 master-0 kubenswrapper[7480]: I0308 22:08:01.503890 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:08:01.503999 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:08:01.503999 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:08:01.503999 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:08:01.505628 master-0 kubenswrapper[7480]: I0308 22:08:01.504007 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:08:01.548205 master-0 kubenswrapper[7480]: I0308 22:08:01.548033 7480 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtbpk\" (UniqueName: \"kubernetes.io/projected/1dfc8afd-2330-46a4-ae5b-36522102b332-kube-api-access-jtbpk\") on node \"master-0\" DevicePath \"\""
Mar 08 22:08:01.548453 master-0 kubenswrapper[7480]: I0308 22:08:01.548222 7480 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1dfc8afd-2330-46a4-ae5b-36522102b332-webhook-certs\") on node \"master-0\" DevicePath \"\""
Mar 08 22:08:01.752157 master-0 kubenswrapper[7480]: I0308 22:08:01.751663 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-8d675b596-ddw98_1dfc8afd-2330-46a4-ae5b-36522102b332/multus-admission-controller/0.log"
Mar 08 22:08:01.752157 master-0 kubenswrapper[7480]: I0308 22:08:01.751744 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-ddw98" event={"ID":"1dfc8afd-2330-46a4-ae5b-36522102b332","Type":"ContainerDied","Data":"ab657f98950abde628b198898d3905a5958a770bb1ea4d2bf6b9cc5f024cadc1"}
Mar 08 22:08:01.752157 master-0 kubenswrapper[7480]: I0308 22:08:01.751797 7480 scope.go:117] "RemoveContainer" containerID="b9a377863624adb6bc6cea75cc961084a7220374ccf2adc5f27393ba6245e41b"
Mar 08 22:08:01.752157 master-0 kubenswrapper[7480]: I0308 22:08:01.751847 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-ddw98"
Mar 08 22:08:01.780443 master-0 kubenswrapper[7480]: I0308 22:08:01.780376 7480 scope.go:117] "RemoveContainer" containerID="fa30505314844ca92e33f96b4695dfb9bc34ac5a945fbb42bad40ad5f234fa56"
Mar 08 22:08:02.328131 master-0 kubenswrapper[7480]: E0308 22:08:02.327787 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:07:52Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:07:52Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:07:52Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:07:52Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:ae042a5d32eb2f18d537f2068849e665b55df7d8360daedaaeea98bd2a79e769\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:d077bbabe6cb885ed229119008480493e8364e4bfddaa00b099f68c52b016e6b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1733328350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:063b8972231e65eb43f6545ba37804f68138dc54d97b91a652a1c5bc7dc76aa5\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:cf682d23b2857e455609879a0867d171a221c18e2cec995dd79570b77c5a4705\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1272201949},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:e0c034ae18daa01af8d073f8cc24ae4af87883c664304910eab1167fdfd60c0b\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:ef0c6b9e405f7a452211e063ce07ded04ccbe38b53860bfd71b5a7cd5072830a\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1229556414},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:82ad8d62d92a8cc5e2391e3b0746219bd740cc26741bc7571010d337240fa112\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:ec87cd8fce2d3b4e2b15f9abaea232c03ff5a6dd46002ea24418a21973abf220\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1220167895},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8\\\"],\\\"sizeBytes\\\":880378279},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7\\\"],\\\"sizeBytes\\\":862197440},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce\\\"],\\\"sizeBytes\\\":557426734},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5\\\"],\\\"sizeBytes\\\":513581866},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821\\\"],\\\"sizeBytes\\\":504658657},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9b2e765b795c30c910c331c85226e5db0d56463b6c81d79ded739cba76e2b032\\\"],\\\"sizeBytes\\\":487151732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efed4867528a19e3de56447aa00fe53a6d97b74a207e9adb57f06c62dcc8944e\\\"],\\\"sizeBytes\\\":480534195},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:243ce0f08a360370edf4960aec94fc6c5be9d4aae26cf8c5320adcd047c1b14f\\\"],\\\"sizeBytes\\\":471430788}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 22:08:02.504626 master-0 kubenswrapper[7480]: I0308 22:08:02.504517 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:08:02.504626 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:08:02.504626 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:08:02.504626 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:08:02.505648 master-0 kubenswrapper[7480]: I0308 22:08:02.504643 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:08:02.545644 master-0 kubenswrapper[7480]: I0308 22:08:02.545570 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-xlrwk_7147d808-f9a2-434c-ae54-77d82a3d2e1f/kube-multus-additional-cni-plugins/0.log"
Mar 08 22:08:02.545896 master-0 kubenswrapper[7480]: I0308 22:08:02.545699 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-xlrwk"
Mar 08 22:08:02.563233 master-0 kubenswrapper[7480]: I0308 22:08:02.563163 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7147d808-f9a2-434c-ae54-77d82a3d2e1f-tuning-conf-dir\") pod \"7147d808-f9a2-434c-ae54-77d82a3d2e1f\" (UID: \"7147d808-f9a2-434c-ae54-77d82a3d2e1f\") "
Mar 08 22:08:02.563320 master-0 kubenswrapper[7480]: I0308 22:08:02.563295 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/7147d808-f9a2-434c-ae54-77d82a3d2e1f-ready\") pod \"7147d808-f9a2-434c-ae54-77d82a3d2e1f\" (UID: \"7147d808-f9a2-434c-ae54-77d82a3d2e1f\") "
Mar 08 22:08:02.563417 master-0 kubenswrapper[7480]: I0308 22:08:02.563360 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7147d808-f9a2-434c-ae54-77d82a3d2e1f-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "7147d808-f9a2-434c-ae54-77d82a3d2e1f" (UID: "7147d808-f9a2-434c-ae54-77d82a3d2e1f"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:08:02.563513 master-0 kubenswrapper[7480]: I0308 22:08:02.563476 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgqvd\" (UniqueName: \"kubernetes.io/projected/7147d808-f9a2-434c-ae54-77d82a3d2e1f-kube-api-access-dgqvd\") pod \"7147d808-f9a2-434c-ae54-77d82a3d2e1f\" (UID: \"7147d808-f9a2-434c-ae54-77d82a3d2e1f\") " Mar 08 22:08:02.563594 master-0 kubenswrapper[7480]: I0308 22:08:02.563563 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7147d808-f9a2-434c-ae54-77d82a3d2e1f-cni-sysctl-allowlist\") pod \"7147d808-f9a2-434c-ae54-77d82a3d2e1f\" (UID: \"7147d808-f9a2-434c-ae54-77d82a3d2e1f\") " Mar 08 22:08:02.563960 master-0 kubenswrapper[7480]: I0308 22:08:02.563920 7480 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7147d808-f9a2-434c-ae54-77d82a3d2e1f-tuning-conf-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 22:08:02.564183 master-0 kubenswrapper[7480]: I0308 22:08:02.564061 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7147d808-f9a2-434c-ae54-77d82a3d2e1f-ready" (OuterVolumeSpecName: "ready") pod "7147d808-f9a2-434c-ae54-77d82a3d2e1f" (UID: "7147d808-f9a2-434c-ae54-77d82a3d2e1f"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 22:08:02.564757 master-0 kubenswrapper[7480]: I0308 22:08:02.564672 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7147d808-f9a2-434c-ae54-77d82a3d2e1f-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7147d808-f9a2-434c-ae54-77d82a3d2e1f" (UID: "7147d808-f9a2-434c-ae54-77d82a3d2e1f"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:08:02.568662 master-0 kubenswrapper[7480]: I0308 22:08:02.568607 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7147d808-f9a2-434c-ae54-77d82a3d2e1f-kube-api-access-dgqvd" (OuterVolumeSpecName: "kube-api-access-dgqvd") pod "7147d808-f9a2-434c-ae54-77d82a3d2e1f" (UID: "7147d808-f9a2-434c-ae54-77d82a3d2e1f"). InnerVolumeSpecName "kube-api-access-dgqvd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 22:08:02.665608 master-0 kubenswrapper[7480]: I0308 22:08:02.665497 7480 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/7147d808-f9a2-434c-ae54-77d82a3d2e1f-ready\") on node \"master-0\" DevicePath \"\"" Mar 08 22:08:02.665608 master-0 kubenswrapper[7480]: I0308 22:08:02.665568 7480 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgqvd\" (UniqueName: \"kubernetes.io/projected/7147d808-f9a2-434c-ae54-77d82a3d2e1f-kube-api-access-dgqvd\") on node \"master-0\" DevicePath \"\"" Mar 08 22:08:02.665608 master-0 kubenswrapper[7480]: I0308 22:08:02.665590 7480 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7147d808-f9a2-434c-ae54-77d82a3d2e1f-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\"" Mar 08 22:08:02.763066 master-0 kubenswrapper[7480]: I0308 22:08:02.762966 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-xlrwk_7147d808-f9a2-434c-ae54-77d82a3d2e1f/kube-multus-additional-cni-plugins/0.log" Mar 08 22:08:02.763066 master-0 kubenswrapper[7480]: I0308 22:08:02.763053 7480 generic.go:334] "Generic (PLEG): container finished" podID="7147d808-f9a2-434c-ae54-77d82a3d2e1f" containerID="085c2b709340e9e7b1c10997c84791d15d8a29ba0dddbf267ba43144aeb516e1" exitCode=137 Mar 08 22:08:02.763333 master-0 kubenswrapper[7480]: I0308 22:08:02.763235 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-xlrwk" Mar 08 22:08:02.763567 master-0 kubenswrapper[7480]: I0308 22:08:02.763449 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-xlrwk" event={"ID":"7147d808-f9a2-434c-ae54-77d82a3d2e1f","Type":"ContainerDied","Data":"085c2b709340e9e7b1c10997c84791d15d8a29ba0dddbf267ba43144aeb516e1"} Mar 08 22:08:02.763647 master-0 kubenswrapper[7480]: I0308 22:08:02.763624 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-xlrwk" event={"ID":"7147d808-f9a2-434c-ae54-77d82a3d2e1f","Type":"ContainerDied","Data":"ea219c680a19acac705e94254b3b285a55f954107866c341dfd96d29ce5bfa38"} Mar 08 22:08:02.763767 master-0 kubenswrapper[7480]: I0308 22:08:02.763697 7480 scope.go:117] "RemoveContainer" containerID="085c2b709340e9e7b1c10997c84791d15d8a29ba0dddbf267ba43144aeb516e1" Mar 08 22:08:02.790612 master-0 kubenswrapper[7480]: I0308 22:08:02.790472 7480 scope.go:117] "RemoveContainer" containerID="085c2b709340e9e7b1c10997c84791d15d8a29ba0dddbf267ba43144aeb516e1" Mar 08 22:08:02.791370 master-0 kubenswrapper[7480]: E0308 22:08:02.791298 7480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"085c2b709340e9e7b1c10997c84791d15d8a29ba0dddbf267ba43144aeb516e1\": container with ID starting with 085c2b709340e9e7b1c10997c84791d15d8a29ba0dddbf267ba43144aeb516e1 not found: ID does not exist" containerID="085c2b709340e9e7b1c10997c84791d15d8a29ba0dddbf267ba43144aeb516e1" Mar 08 22:08:02.791495 master-0 kubenswrapper[7480]: I0308 22:08:02.791383 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"085c2b709340e9e7b1c10997c84791d15d8a29ba0dddbf267ba43144aeb516e1"} err="failed to get container status \"085c2b709340e9e7b1c10997c84791d15d8a29ba0dddbf267ba43144aeb516e1\": rpc error: code = 
Mar 08 22:08:03.504505 master-0 kubenswrapper[7480]: I0308 22:08:03.504417 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:08:03.504505 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:08:03.504505 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:08:03.504505 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:08:03.506058 master-0 kubenswrapper[7480]: I0308 22:08:03.505999 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:08:04.504377 master-0 kubenswrapper[7480]: I0308 22:08:04.504254 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:08:04.504377 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:08:04.504377 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:08:04.504377 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:08:04.504869 master-0 kubenswrapper[7480]: I0308 22:08:04.504404 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:08:05.503583 master-0 kubenswrapper[7480]: I0308 22:08:05.503466 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:08:05.503583 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:08:05.503583 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:08:05.503583 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:08:05.504833 master-0 kubenswrapper[7480]: I0308 22:08:05.503591 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:08:06.504968 master-0 kubenswrapper[7480]: I0308 22:08:06.504849 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:08:06.504968 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:08:06.504968 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:08:06.504968 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:08:06.506153 master-0 kubenswrapper[7480]: I0308 22:08:06.504985 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:08:07.504538 master-0 kubenswrapper[7480]: I0308 22:08:07.504381 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:08:07.504538 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:08:07.504538 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:08:07.504538 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:08:07.505761 master-0 kubenswrapper[7480]: I0308 22:08:07.504553 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:08:08.504300 master-0 kubenswrapper[7480]: I0308 22:08:08.504218 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:08:08.504300 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:08:08.504300 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:08:08.504300 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:08:08.505118 master-0 kubenswrapper[7480]: I0308 22:08:08.504328 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:08:08.854287 master-0 kubenswrapper[7480]: E0308 22:08:08.854208 7480 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 22:08:09.439632 master-0 kubenswrapper[7480]: I0308 22:08:09.439557 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 22:08:09.502679 master-0 kubenswrapper[7480]: I0308 22:08:09.502606 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:08:09.502679 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:08:09.502679 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:08:09.502679 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:08:09.503205 master-0 kubenswrapper[7480]: I0308 22:08:09.502697 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:08:10.505372 master-0 kubenswrapper[7480]: I0308 22:08:10.505280 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:08:10.505372 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:08:10.505372 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:08:10.505372 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:08:10.506459 master-0 kubenswrapper[7480]: I0308 22:08:10.505390 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:08:11.504296 master-0 kubenswrapper[7480]: I0308 22:08:11.504194 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:08:11.504296 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:08:11.504296 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:08:11.504296 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:08:11.505127 master-0 kubenswrapper[7480]: I0308 22:08:11.504319 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:08:12.329115 master-0 kubenswrapper[7480]: E0308 22:08:12.329051 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 22:08:12.504707 master-0 kubenswrapper[7480]: I0308 22:08:12.504486 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:08:12.504707 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:08:12.504707 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:08:12.504707 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:08:12.504707 master-0 kubenswrapper[7480]: I0308 22:08:12.504619 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:08:12.780967 master-0 kubenswrapper[7480]: I0308 22:08:12.780907 7480 scope.go:117] "RemoveContainer" containerID="b6b246bb81907eac732c126403c542413078697b3a057b896aee540f8c7e39d9"
Mar 08 22:08:12.853558 master-0 kubenswrapper[7480]: I0308 22:08:12.853489 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log"
Mar 08 22:08:12.856379 master-0 kubenswrapper[7480]: I0308 22:08:12.856338 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log"
Mar 08 22:08:12.857602 master-0 kubenswrapper[7480]: I0308 22:08:12.857557 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd/0.log"
Mar 08 22:08:12.858340 master-0 kubenswrapper[7480]: I0308 22:08:12.858296 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcdctl/0.log"
Mar 08 22:08:12.860773 master-0 kubenswrapper[7480]: I0308 22:08:12.860708 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 08 22:08:12.868348 master-0 kubenswrapper[7480]: I0308 22:08:12.868303 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log"
Mar 08 22:08:12.870022 master-0 kubenswrapper[7480]: I0308 22:08:12.869964 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log"
Mar 08 22:08:12.871336 master-0 kubenswrapper[7480]: I0308 22:08:12.871292 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd/0.log"
Mar 08 22:08:12.872536 master-0 kubenswrapper[7480]: I0308 22:08:12.872464 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcdctl/0.log"
Mar 08 22:08:12.874611 master-0 kubenswrapper[7480]: I0308 22:08:12.874537 7480 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="a69f74d53c596ef3b12b5da1bd7cc9ace4063b0146a15cbb35be1605265e652f" exitCode=137
Mar 08 22:08:12.874611 master-0 kubenswrapper[7480]: I0308 22:08:12.874597 7480 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="17f9d25fbb914d5a3dabb518816b0dd89ec825230d824bcb65e8fd78aa107263" exitCode=137
Mar 08 22:08:12.874816 master-0 kubenswrapper[7480]: I0308 22:08:12.874684 7480 scope.go:117] "RemoveContainer" containerID="fce1d0b33f79a5db2c834c3acb6708a2b4525d393f2127dc9ac29f5cb4e7d10f"
Mar 08 22:08:12.874816 master-0 kubenswrapper[7480]: I0308 22:08:12.874785 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 08 22:08:12.897503 master-0 kubenswrapper[7480]: I0308 22:08:12.897456 7480 scope.go:117] "RemoveContainer" containerID="0434eeabd4f2ba088632cac4cb23a98f511011740d9d088962f7101dda68fb2e"
Mar 08 22:08:12.914550 master-0 kubenswrapper[7480]: I0308 22:08:12.914496 7480 scope.go:117] "RemoveContainer" containerID="9dab50817c32126dd044c242d5caff6d18e7c835ca9c9b1834ec3d5b22d1386e"
Mar 08 22:08:12.942694 master-0 kubenswrapper[7480]: I0308 22:08:12.942623 7480 scope.go:117] "RemoveContainer" containerID="a69f74d53c596ef3b12b5da1bd7cc9ace4063b0146a15cbb35be1605265e652f"
Mar 08 22:08:12.973713 master-0 kubenswrapper[7480]: I0308 22:08:12.973656 7480 scope.go:117] "RemoveContainer" containerID="17f9d25fbb914d5a3dabb518816b0dd89ec825230d824bcb65e8fd78aa107263"
Mar 08 22:08:12.997789 master-0 kubenswrapper[7480]: I0308 22:08:12.997732 7480 scope.go:117] "RemoveContainer" containerID="2fd8e5815a9775ccbbdb270e0d056e236703cf28a68a240b97100d92c494a2aa"
Mar 08 22:08:13.018041 master-0 kubenswrapper[7480]: I0308 22:08:13.017964 7480 scope.go:117] "RemoveContainer" containerID="9bb960964f2aa5ec9092fb22294453593afb84fc828f2c73db9eceae206a6489"
Mar 08 22:08:13.022635 master-0 kubenswrapper[7480]: I0308 22:08:13.022555 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") "
Mar 08 22:08:13.022747 master-0 kubenswrapper[7480]: I0308 22:08:13.022684 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") "
Mar 08 22:08:13.022747 master-0 kubenswrapper[7480]: I0308 22:08:13.022713 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") "
Mar 08 22:08:13.022908 master-0 kubenswrapper[7480]: I0308 22:08:13.022760 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") "
Mar 08 22:08:13.022908 master-0 kubenswrapper[7480]: I0308 22:08:13.022788 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") "
Mar 08 22:08:13.022908 master-0 kubenswrapper[7480]: I0308 22:08:13.022788 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 22:08:13.022908 master-0 kubenswrapper[7480]: I0308 22:08:13.022836 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") "
Mar 08 22:08:13.022908 master-0 kubenswrapper[7480]: I0308 22:08:13.022848 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir" (OuterVolumeSpecName: "data-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 22:08:13.022908 master-0 kubenswrapper[7480]: I0308 22:08:13.022871 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir" (OuterVolumeSpecName: "log-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 22:08:13.022908 master-0 kubenswrapper[7480]: I0308 22:08:13.022907 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 22:08:13.023469 master-0 kubenswrapper[7480]: I0308 22:08:13.022954 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "usr-local-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 22:08:13.023469 master-0 kubenswrapper[7480]: I0308 22:08:13.022992 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 22:08:13.023469 master-0 kubenswrapper[7480]: I0308 22:08:13.023349 7480 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") on node \"master-0\" DevicePath \"\""
Mar 08 22:08:13.023469 master-0 kubenswrapper[7480]: I0308 22:08:13.023369 7480 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 22:08:13.023469 master-0 kubenswrapper[7480]: I0308 22:08:13.023381 7480 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 22:08:13.023469 master-0 kubenswrapper[7480]: I0308 22:08:13.023392 7480 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 22:08:13.023469 master-0 kubenswrapper[7480]: I0308 22:08:13.023403 7480 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 22:08:13.023469 master-0 kubenswrapper[7480]: I0308 22:08:13.023415 7480 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 22:08:13.049974 master-0 kubenswrapper[7480]: I0308 22:08:13.049747 7480 scope.go:117] "RemoveContainer" containerID="528903c482dba26f9b2b79cd20a575e2b34c58d798336813cdf2368407e7ac08"
Mar 08 22:08:13.068464 master-0 kubenswrapper[7480]: I0308 22:08:13.068398 7480 scope.go:117] "RemoveContainer" containerID="fce1d0b33f79a5db2c834c3acb6708a2b4525d393f2127dc9ac29f5cb4e7d10f"
Mar 08 22:08:13.068972 master-0 kubenswrapper[7480]: E0308 22:08:13.068923 7480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fce1d0b33f79a5db2c834c3acb6708a2b4525d393f2127dc9ac29f5cb4e7d10f\": container with ID starting with fce1d0b33f79a5db2c834c3acb6708a2b4525d393f2127dc9ac29f5cb4e7d10f not found: ID does not exist" containerID="fce1d0b33f79a5db2c834c3acb6708a2b4525d393f2127dc9ac29f5cb4e7d10f"
Mar 08 22:08:13.069204 master-0 kubenswrapper[7480]: I0308 22:08:13.068962 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fce1d0b33f79a5db2c834c3acb6708a2b4525d393f2127dc9ac29f5cb4e7d10f"} err="failed to get container status \"fce1d0b33f79a5db2c834c3acb6708a2b4525d393f2127dc9ac29f5cb4e7d10f\": rpc error: code = NotFound desc = could not find container \"fce1d0b33f79a5db2c834c3acb6708a2b4525d393f2127dc9ac29f5cb4e7d10f\": container with ID starting with fce1d0b33f79a5db2c834c3acb6708a2b4525d393f2127dc9ac29f5cb4e7d10f not found: ID does not exist"
Mar 08 22:08:13.069204 master-0 kubenswrapper[7480]: I0308 22:08:13.069010 7480 scope.go:117] "RemoveContainer" containerID="0434eeabd4f2ba088632cac4cb23a98f511011740d9d088962f7101dda68fb2e"
Mar 08 22:08:13.069824 master-0 kubenswrapper[7480]: E0308 22:08:13.069538 7480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0434eeabd4f2ba088632cac4cb23a98f511011740d9d088962f7101dda68fb2e\": container with ID starting with 0434eeabd4f2ba088632cac4cb23a98f511011740d9d088962f7101dda68fb2e not found: ID does not exist" containerID="0434eeabd4f2ba088632cac4cb23a98f511011740d9d088962f7101dda68fb2e"
\"0434eeabd4f2ba088632cac4cb23a98f511011740d9d088962f7101dda68fb2e\": container with ID starting with 0434eeabd4f2ba088632cac4cb23a98f511011740d9d088962f7101dda68fb2e not found: ID does not exist" containerID="0434eeabd4f2ba088632cac4cb23a98f511011740d9d088962f7101dda68fb2e" Mar 08 22:08:13.069824 master-0 kubenswrapper[7480]: I0308 22:08:13.069623 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0434eeabd4f2ba088632cac4cb23a98f511011740d9d088962f7101dda68fb2e"} err="failed to get container status \"0434eeabd4f2ba088632cac4cb23a98f511011740d9d088962f7101dda68fb2e\": rpc error: code = NotFound desc = could not find container \"0434eeabd4f2ba088632cac4cb23a98f511011740d9d088962f7101dda68fb2e\": container with ID starting with 0434eeabd4f2ba088632cac4cb23a98f511011740d9d088962f7101dda68fb2e not found: ID does not exist" Mar 08 22:08:13.069824 master-0 kubenswrapper[7480]: I0308 22:08:13.069670 7480 scope.go:117] "RemoveContainer" containerID="9dab50817c32126dd044c242d5caff6d18e7c835ca9c9b1834ec3d5b22d1386e" Mar 08 22:08:13.070131 master-0 kubenswrapper[7480]: E0308 22:08:13.070065 7480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dab50817c32126dd044c242d5caff6d18e7c835ca9c9b1834ec3d5b22d1386e\": container with ID starting with 9dab50817c32126dd044c242d5caff6d18e7c835ca9c9b1834ec3d5b22d1386e not found: ID does not exist" containerID="9dab50817c32126dd044c242d5caff6d18e7c835ca9c9b1834ec3d5b22d1386e" Mar 08 22:08:13.070208 master-0 kubenswrapper[7480]: I0308 22:08:13.070124 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dab50817c32126dd044c242d5caff6d18e7c835ca9c9b1834ec3d5b22d1386e"} err="failed to get container status \"9dab50817c32126dd044c242d5caff6d18e7c835ca9c9b1834ec3d5b22d1386e\": rpc error: code = NotFound desc = could not find container \"9dab50817c32126dd044c242d5caff6d18e7c835ca9c9b1834ec3d5b22d1386e\": container with ID starting with 9dab50817c32126dd044c242d5caff6d18e7c835ca9c9b1834ec3d5b22d1386e not found: ID does not exist" Mar 08 22:08:13.070208 master-0 kubenswrapper[7480]: I0308 22:08:13.070146 7480 scope.go:117] "RemoveContainer" containerID="a69f74d53c596ef3b12b5da1bd7cc9ace4063b0146a15cbb35be1605265e652f" Mar 08 22:08:13.070789 master-0 kubenswrapper[7480]: E0308 22:08:13.070606 7480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a69f74d53c596ef3b12b5da1bd7cc9ace4063b0146a15cbb35be1605265e652f\": container with ID starting with a69f74d53c596ef3b12b5da1bd7cc9ace4063b0146a15cbb35be1605265e652f not found: ID does not exist" containerID="a69f74d53c596ef3b12b5da1bd7cc9ace4063b0146a15cbb35be1605265e652f" Mar 08 22:08:13.070789 master-0 kubenswrapper[7480]: I0308 22:08:13.070655 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a69f74d53c596ef3b12b5da1bd7cc9ace4063b0146a15cbb35be1605265e652f"} err="failed to get container status \"a69f74d53c596ef3b12b5da1bd7cc9ace4063b0146a15cbb35be1605265e652f\": rpc error: code = NotFound desc = could not find container \"a69f74d53c596ef3b12b5da1bd7cc9ace4063b0146a15cbb35be1605265e652f\": container with ID starting with a69f74d53c596ef3b12b5da1bd7cc9ace4063b0146a15cbb35be1605265e652f not found: ID does not exist" Mar 08 22:08:13.070789 master-0 kubenswrapper[7480]: I0308 22:08:13.070674 7480 scope.go:117] "RemoveContainer" 
containerID="17f9d25fbb914d5a3dabb518816b0dd89ec825230d824bcb65e8fd78aa107263" Mar 08 22:08:13.071202 master-0 kubenswrapper[7480]: E0308 22:08:13.071141 7480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17f9d25fbb914d5a3dabb518816b0dd89ec825230d824bcb65e8fd78aa107263\": container with ID starting with 17f9d25fbb914d5a3dabb518816b0dd89ec825230d824bcb65e8fd78aa107263 not found: ID does not exist" containerID="17f9d25fbb914d5a3dabb518816b0dd89ec825230d824bcb65e8fd78aa107263" Mar 08 22:08:13.071303 master-0 kubenswrapper[7480]: I0308 22:08:13.071198 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17f9d25fbb914d5a3dabb518816b0dd89ec825230d824bcb65e8fd78aa107263"} err="failed to get container status \"17f9d25fbb914d5a3dabb518816b0dd89ec825230d824bcb65e8fd78aa107263\": rpc error: code = NotFound desc = could not find container \"17f9d25fbb914d5a3dabb518816b0dd89ec825230d824bcb65e8fd78aa107263\": container with ID starting with 17f9d25fbb914d5a3dabb518816b0dd89ec825230d824bcb65e8fd78aa107263 not found: ID does not exist" Mar 08 22:08:13.071303 master-0 kubenswrapper[7480]: I0308 22:08:13.071217 7480 scope.go:117] "RemoveContainer" containerID="2fd8e5815a9775ccbbdb270e0d056e236703cf28a68a240b97100d92c494a2aa" Mar 08 22:08:13.071878 master-0 kubenswrapper[7480]: E0308 22:08:13.071822 7480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fd8e5815a9775ccbbdb270e0d056e236703cf28a68a240b97100d92c494a2aa\": container with ID starting with 2fd8e5815a9775ccbbdb270e0d056e236703cf28a68a240b97100d92c494a2aa not found: ID does not exist" containerID="2fd8e5815a9775ccbbdb270e0d056e236703cf28a68a240b97100d92c494a2aa" Mar 08 22:08:13.071979 master-0 kubenswrapper[7480]: I0308 22:08:13.071889 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fd8e5815a9775ccbbdb270e0d056e236703cf28a68a240b97100d92c494a2aa"} err="failed to get container status \"2fd8e5815a9775ccbbdb270e0d056e236703cf28a68a240b97100d92c494a2aa\": rpc error: code = NotFound desc = could not find container \"2fd8e5815a9775ccbbdb270e0d056e236703cf28a68a240b97100d92c494a2aa\": container with ID starting with 2fd8e5815a9775ccbbdb270e0d056e236703cf28a68a240b97100d92c494a2aa not found: ID does not exist" Mar 08 22:08:13.072064 master-0 kubenswrapper[7480]: I0308 22:08:13.072003 7480 scope.go:117] "RemoveContainer" containerID="9bb960964f2aa5ec9092fb22294453593afb84fc828f2c73db9eceae206a6489" Mar 08 22:08:13.073542 master-0 kubenswrapper[7480]: E0308 22:08:13.073335 7480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bb960964f2aa5ec9092fb22294453593afb84fc828f2c73db9eceae206a6489\": container with ID starting with 9bb960964f2aa5ec9092fb22294453593afb84fc828f2c73db9eceae206a6489 not found: ID does not exist" containerID="9bb960964f2aa5ec9092fb22294453593afb84fc828f2c73db9eceae206a6489" Mar 08 22:08:13.073542 master-0 kubenswrapper[7480]: I0308 22:08:13.073388 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bb960964f2aa5ec9092fb22294453593afb84fc828f2c73db9eceae206a6489"} err="failed to get container status \"9bb960964f2aa5ec9092fb22294453593afb84fc828f2c73db9eceae206a6489\": rpc error: code = NotFound desc = could not find container 
\"9bb960964f2aa5ec9092fb22294453593afb84fc828f2c73db9eceae206a6489\": container with ID starting with 9bb960964f2aa5ec9092fb22294453593afb84fc828f2c73db9eceae206a6489 not found: ID does not exist" Mar 08 22:08:13.073542 master-0 kubenswrapper[7480]: I0308 22:08:13.073421 7480 scope.go:117] "RemoveContainer" containerID="528903c482dba26f9b2b79cd20a575e2b34c58d798336813cdf2368407e7ac08" Mar 08 22:08:13.073845 master-0 kubenswrapper[7480]: E0308 22:08:13.073762 7480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"528903c482dba26f9b2b79cd20a575e2b34c58d798336813cdf2368407e7ac08\": container with ID starting with 528903c482dba26f9b2b79cd20a575e2b34c58d798336813cdf2368407e7ac08 not found: ID does not exist" containerID="528903c482dba26f9b2b79cd20a575e2b34c58d798336813cdf2368407e7ac08" Mar 08 22:08:13.073845 master-0 kubenswrapper[7480]: I0308 22:08:13.073796 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"528903c482dba26f9b2b79cd20a575e2b34c58d798336813cdf2368407e7ac08"} err="failed to get container status \"528903c482dba26f9b2b79cd20a575e2b34c58d798336813cdf2368407e7ac08\": rpc error: code = NotFound desc = could not find container \"528903c482dba26f9b2b79cd20a575e2b34c58d798336813cdf2368407e7ac08\": container with ID starting with 528903c482dba26f9b2b79cd20a575e2b34c58d798336813cdf2368407e7ac08 not found: ID does not exist" Mar 08 22:08:13.073845 master-0 kubenswrapper[7480]: I0308 22:08:13.073825 7480 scope.go:117] "RemoveContainer" containerID="fce1d0b33f79a5db2c834c3acb6708a2b4525d393f2127dc9ac29f5cb4e7d10f" Mar 08 22:08:13.074425 master-0 kubenswrapper[7480]: I0308 22:08:13.074292 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fce1d0b33f79a5db2c834c3acb6708a2b4525d393f2127dc9ac29f5cb4e7d10f"} err="failed to get container status \"fce1d0b33f79a5db2c834c3acb6708a2b4525d393f2127dc9ac29f5cb4e7d10f\": rpc error: code = NotFound desc = could not find container \"fce1d0b33f79a5db2c834c3acb6708a2b4525d393f2127dc9ac29f5cb4e7d10f\": container with ID starting with fce1d0b33f79a5db2c834c3acb6708a2b4525d393f2127dc9ac29f5cb4e7d10f not found: ID does not exist" Mar 08 22:08:13.074425 master-0 kubenswrapper[7480]: I0308 22:08:13.074316 7480 scope.go:117] "RemoveContainer" containerID="0434eeabd4f2ba088632cac4cb23a98f511011740d9d088962f7101dda68fb2e" Mar 08 22:08:13.075321 master-0 kubenswrapper[7480]: I0308 22:08:13.075251 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0434eeabd4f2ba088632cac4cb23a98f511011740d9d088962f7101dda68fb2e"} err="failed to get container status \"0434eeabd4f2ba088632cac4cb23a98f511011740d9d088962f7101dda68fb2e\": rpc error: code = NotFound desc = could not find container \"0434eeabd4f2ba088632cac4cb23a98f511011740d9d088962f7101dda68fb2e\": container with ID starting with 0434eeabd4f2ba088632cac4cb23a98f511011740d9d088962f7101dda68fb2e not found: ID does not exist" Mar 08 22:08:13.075321 master-0 kubenswrapper[7480]: I0308 22:08:13.075303 7480 scope.go:117] "RemoveContainer" containerID="9dab50817c32126dd044c242d5caff6d18e7c835ca9c9b1834ec3d5b22d1386e" Mar 08 22:08:13.076278 master-0 kubenswrapper[7480]: I0308 22:08:13.076113 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dab50817c32126dd044c242d5caff6d18e7c835ca9c9b1834ec3d5b22d1386e"} err="failed to get container status 
\"9dab50817c32126dd044c242d5caff6d18e7c835ca9c9b1834ec3d5b22d1386e\": rpc error: code = NotFound desc = could not find container \"9dab50817c32126dd044c242d5caff6d18e7c835ca9c9b1834ec3d5b22d1386e\": container with ID starting with 9dab50817c32126dd044c242d5caff6d18e7c835ca9c9b1834ec3d5b22d1386e not found: ID does not exist" Mar 08 22:08:13.076278 master-0 kubenswrapper[7480]: I0308 22:08:13.076156 7480 scope.go:117] "RemoveContainer" containerID="a69f74d53c596ef3b12b5da1bd7cc9ace4063b0146a15cbb35be1605265e652f" Mar 08 22:08:13.076634 master-0 kubenswrapper[7480]: I0308 22:08:13.076579 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a69f74d53c596ef3b12b5da1bd7cc9ace4063b0146a15cbb35be1605265e652f"} err="failed to get container status \"a69f74d53c596ef3b12b5da1bd7cc9ace4063b0146a15cbb35be1605265e652f\": rpc error: code = NotFound desc = could not find container \"a69f74d53c596ef3b12b5da1bd7cc9ace4063b0146a15cbb35be1605265e652f\": container with ID starting with a69f74d53c596ef3b12b5da1bd7cc9ace4063b0146a15cbb35be1605265e652f not found: ID does not exist" Mar 08 22:08:13.076634 master-0 kubenswrapper[7480]: I0308 22:08:13.076610 7480 scope.go:117] "RemoveContainer" containerID="17f9d25fbb914d5a3dabb518816b0dd89ec825230d824bcb65e8fd78aa107263" Mar 08 22:08:13.077145 master-0 kubenswrapper[7480]: I0308 22:08:13.076916 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17f9d25fbb914d5a3dabb518816b0dd89ec825230d824bcb65e8fd78aa107263"} err="failed to get container status \"17f9d25fbb914d5a3dabb518816b0dd89ec825230d824bcb65e8fd78aa107263\": rpc error: code = NotFound desc = could not find container \"17f9d25fbb914d5a3dabb518816b0dd89ec825230d824bcb65e8fd78aa107263\": container with ID starting with 17f9d25fbb914d5a3dabb518816b0dd89ec825230d824bcb65e8fd78aa107263 not found: ID does not exist" Mar 08 22:08:13.077145 master-0 kubenswrapper[7480]: I0308 22:08:13.076961 7480 scope.go:117] "RemoveContainer" containerID="2fd8e5815a9775ccbbdb270e0d056e236703cf28a68a240b97100d92c494a2aa" Mar 08 22:08:13.077640 master-0 kubenswrapper[7480]: I0308 22:08:13.077601 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fd8e5815a9775ccbbdb270e0d056e236703cf28a68a240b97100d92c494a2aa"} err="failed to get container status \"2fd8e5815a9775ccbbdb270e0d056e236703cf28a68a240b97100d92c494a2aa\": rpc error: code = NotFound desc = could not find container \"2fd8e5815a9775ccbbdb270e0d056e236703cf28a68a240b97100d92c494a2aa\": container with ID starting with 2fd8e5815a9775ccbbdb270e0d056e236703cf28a68a240b97100d92c494a2aa not found: ID does not exist" Mar 08 22:08:13.077640 master-0 kubenswrapper[7480]: I0308 22:08:13.077634 7480 scope.go:117] "RemoveContainer" containerID="9bb960964f2aa5ec9092fb22294453593afb84fc828f2c73db9eceae206a6489" Mar 08 22:08:13.078188 master-0 kubenswrapper[7480]: I0308 22:08:13.078151 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bb960964f2aa5ec9092fb22294453593afb84fc828f2c73db9eceae206a6489"} err="failed to get container status \"9bb960964f2aa5ec9092fb22294453593afb84fc828f2c73db9eceae206a6489\": rpc error: code = NotFound desc = could not find container \"9bb960964f2aa5ec9092fb22294453593afb84fc828f2c73db9eceae206a6489\": container with ID starting with 9bb960964f2aa5ec9092fb22294453593afb84fc828f2c73db9eceae206a6489 not found: ID does not exist" Mar 08 22:08:13.078188 master-0 
Mar 08 22:08:13.078188 master-0 kubenswrapper[7480]: I0308 22:08:13.078180 7480 scope.go:117] "RemoveContainer" containerID="528903c482dba26f9b2b79cd20a575e2b34c58d798336813cdf2368407e7ac08"
Mar 08 22:08:13.078616 master-0 kubenswrapper[7480]: I0308 22:08:13.078567 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"528903c482dba26f9b2b79cd20a575e2b34c58d798336813cdf2368407e7ac08"} err="failed to get container status \"528903c482dba26f9b2b79cd20a575e2b34c58d798336813cdf2368407e7ac08\": rpc error: code = NotFound desc = could not find container \"528903c482dba26f9b2b79cd20a575e2b34c58d798336813cdf2368407e7ac08\": container with ID starting with 528903c482dba26f9b2b79cd20a575e2b34c58d798336813cdf2368407e7ac08 not found: ID does not exist"
Mar 08 22:08:13.503553 master-0 kubenswrapper[7480]: I0308 22:08:13.503474 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:08:13.503553 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:08:13.503553 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:08:13.503553 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:08:13.505246 master-0 kubenswrapper[7480]: I0308 22:08:13.503568 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:08:13.797538 master-0 kubenswrapper[7480]: I0308 22:08:13.797342 7480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" path="/var/lib/kubelet/pods/8e52bef89f4b50e4590a1719bcc5d7e5/volumes"
Mar 08 22:08:13.889014 master-0 kubenswrapper[7480]: I0308 22:08:13.888955 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"e3a61e0f18998d1659f1848d9ff8c4de1817df1723214bfa069260c375e7739f"}
Mar 08 22:08:16.236739 master-0 kubenswrapper[7480]: E0308 22:08:16.236510 7480 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.189afd13325e7889 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:8e52bef89f4b50e4590a1719bcc5d7e5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Killing,Message:Stopping container etcd-rev,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 22:07:42.214969481 +0000 UTC m=+612.668590093,LastTimestamp:2026-03-08 22:07:42.214969481 +0000 UTC m=+612.668590093,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 08 22:08:18.855445 master-0 kubenswrapper[7480]: E0308 22:08:18.855268 7480 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 22:08:21.780879 master-0 kubenswrapper[7480]: I0308 22:08:21.780679 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 08 22:08:21.815988 master-0 kubenswrapper[7480]: I0308 22:08:21.815894 7480 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="5b9fb57f-3c02-459c-97cf-261a396cc93f"
Mar 08 22:08:21.815988 master-0 kubenswrapper[7480]: I0308 22:08:21.815960 7480 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="5b9fb57f-3c02-459c-97cf-261a396cc93f"
Mar 08 22:08:22.330799 master-0 kubenswrapper[7480]: E0308 22:08:22.330716 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": the server was unable to return a response in the time allotted, but may still be processing the request (get nodes master-0)"
Mar 08 22:08:23.989630 master-0 kubenswrapper[7480]: I0308 22:08:23.989597 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_ee0b93ec-6ea0-4704-9449-57781a482ce4/installer/0.log"
Mar 08 22:08:23.989934 master-0 kubenswrapper[7480]: I0308 22:08:23.989914 7480 generic.go:334] "Generic (PLEG): container finished" podID="ee0b93ec-6ea0-4704-9449-57781a482ce4" containerID="c38d9f8500098eb10c48b40a07d5d0aefa68c69ce87a29f847a74bc382b44913" exitCode=1
Mar 08 22:08:23.990039 master-0 kubenswrapper[7480]: I0308 22:08:23.989979 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"ee0b93ec-6ea0-4704-9449-57781a482ce4","Type":"ContainerDied","Data":"c38d9f8500098eb10c48b40a07d5d0aefa68c69ce87a29f847a74bc382b44913"}
Mar 08 22:08:25.353488 master-0 kubenswrapper[7480]: I0308 22:08:25.353402 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_ee0b93ec-6ea0-4704-9449-57781a482ce4/installer/0.log"
Mar 08 22:08:25.353775 master-0 kubenswrapper[7480]: I0308 22:08:25.353519 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 08 22:08:25.547350 master-0 kubenswrapper[7480]: I0308 22:08:25.547153 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ee0b93ec-6ea0-4704-9449-57781a482ce4-var-lock\") pod \"ee0b93ec-6ea0-4704-9449-57781a482ce4\" (UID: \"ee0b93ec-6ea0-4704-9449-57781a482ce4\") "
Mar 08 22:08:25.547350 master-0 kubenswrapper[7480]: I0308 22:08:25.547272 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee0b93ec-6ea0-4704-9449-57781a482ce4-kubelet-dir\") pod \"ee0b93ec-6ea0-4704-9449-57781a482ce4\" (UID: \"ee0b93ec-6ea0-4704-9449-57781a482ce4\") "
Mar 08 22:08:25.547854 master-0 kubenswrapper[7480]: I0308 22:08:25.547374 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee0b93ec-6ea0-4704-9449-57781a482ce4-var-lock" (OuterVolumeSpecName: "var-lock") pod "ee0b93ec-6ea0-4704-9449-57781a482ce4" (UID: "ee0b93ec-6ea0-4704-9449-57781a482ce4"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 22:08:25.547854 master-0 kubenswrapper[7480]: I0308 22:08:25.547456 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee0b93ec-6ea0-4704-9449-57781a482ce4-kube-api-access\") pod \"ee0b93ec-6ea0-4704-9449-57781a482ce4\" (UID: \"ee0b93ec-6ea0-4704-9449-57781a482ce4\") "
Mar 08 22:08:25.547854 master-0 kubenswrapper[7480]: I0308 22:08:25.547463 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee0b93ec-6ea0-4704-9449-57781a482ce4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ee0b93ec-6ea0-4704-9449-57781a482ce4" (UID: "ee0b93ec-6ea0-4704-9449-57781a482ce4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 22:08:25.548278 master-0 kubenswrapper[7480]: I0308 22:08:25.548225 7480 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ee0b93ec-6ea0-4704-9449-57781a482ce4-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 08 22:08:25.548278 master-0 kubenswrapper[7480]: I0308 22:08:25.548272 7480 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee0b93ec-6ea0-4704-9449-57781a482ce4-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 22:08:25.553441 master-0 kubenswrapper[7480]: I0308 22:08:25.553379 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee0b93ec-6ea0-4704-9449-57781a482ce4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ee0b93ec-6ea0-4704-9449-57781a482ce4" (UID: "ee0b93ec-6ea0-4704-9449-57781a482ce4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 22:08:25.650459 master-0 kubenswrapper[7480]: I0308 22:08:25.650387 7480 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee0b93ec-6ea0-4704-9449-57781a482ce4-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 08 22:08:26.007539 master-0 kubenswrapper[7480]: I0308 22:08:26.007457 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_ee0b93ec-6ea0-4704-9449-57781a482ce4/installer/0.log"
Mar 08 22:08:26.007902 master-0 kubenswrapper[7480]: I0308 22:08:26.007565 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"ee0b93ec-6ea0-4704-9449-57781a482ce4","Type":"ContainerDied","Data":"2845903e096222c96f510db45f4f9c79a71bfc1e7049da80e97dc3bb6436df6c"}
Mar 08 22:08:26.007902 master-0 kubenswrapper[7480]: I0308 22:08:26.007610 7480 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2845903e096222c96f510db45f4f9c79a71bfc1e7049da80e97dc3bb6436df6c"
Mar 08 22:08:26.007902 master-0 kubenswrapper[7480]: I0308 22:08:26.007670 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 08 22:08:28.856922 master-0 kubenswrapper[7480]: E0308 22:08:28.856757 7480 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
kubenswrapper[7480]: [+]process-running ok Mar 08 22:08:29.503127 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:08:29.503793 master-0 kubenswrapper[7480]: I0308 22:08:29.503164 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:08:30.504762 master-0 kubenswrapper[7480]: I0308 22:08:30.504651 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:08:30.504762 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:08:30.504762 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:08:30.504762 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:08:30.506130 master-0 kubenswrapper[7480]: I0308 22:08:30.504795 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:08:31.503476 master-0 kubenswrapper[7480]: I0308 22:08:31.503304 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:08:31.503476 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:08:31.503476 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:08:31.503476 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:08:31.503793 master-0 kubenswrapper[7480]: I0308 22:08:31.503440 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:08:32.331601 master-0 kubenswrapper[7480]: E0308 22:08:32.331512 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 22:08:32.504762 master-0 kubenswrapper[7480]: I0308 22:08:32.504659 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:08:32.504762 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:08:32.504762 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:08:32.504762 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:08:32.504762 master-0 kubenswrapper[7480]: I0308 22:08:32.504758 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:08:33.505400 master-0 kubenswrapper[7480]: I0308 22:08:33.505290 7480 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:08:33.505400 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:08:33.505400 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:08:33.505400 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:08:33.506696 master-0 kubenswrapper[7480]: I0308 22:08:33.505466 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:08:34.504665 master-0 kubenswrapper[7480]: I0308 22:08:34.504525 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:08:34.504665 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:08:34.504665 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:08:34.504665 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:08:34.504665 master-0 kubenswrapper[7480]: I0308 22:08:34.504666 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:08:35.504918 master-0 kubenswrapper[7480]: I0308 22:08:35.504804 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:08:35.504918 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:08:35.504918 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:08:35.504918 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:08:35.506153 master-0 kubenswrapper[7480]: I0308 22:08:35.504953 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:08:36.503748 master-0 kubenswrapper[7480]: I0308 22:08:36.503633 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:08:36.503748 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:08:36.503748 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:08:36.503748 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:08:36.503748 master-0 kubenswrapper[7480]: I0308 22:08:36.503742 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:08:37.504664 master-0 kubenswrapper[7480]: I0308 22:08:37.504483 
7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:08:37.504664 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:08:37.504664 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:08:37.504664 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:08:37.505770 master-0 kubenswrapper[7480]: I0308 22:08:37.504782 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:08:38.505015 master-0 kubenswrapper[7480]: I0308 22:08:38.504902 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:08:38.505015 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:08:38.505015 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:08:38.505015 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:08:38.506001 master-0 kubenswrapper[7480]: I0308 22:08:38.505012 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:08:38.857637 master-0 kubenswrapper[7480]: E0308 22:08:38.857516 7480 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 22:08:38.857637 master-0 kubenswrapper[7480]: I0308 22:08:38.857603 7480 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 08 22:08:39.127956 master-0 kubenswrapper[7480]: I0308 22:08:39.127745 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-trhtl_dfe625a1-5ba4-491f-9ab3-5d91154961a0/approver/1.log" Mar 08 22:08:39.128502 master-0 kubenswrapper[7480]: I0308 22:08:39.128445 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-trhtl_dfe625a1-5ba4-491f-9ab3-5d91154961a0/approver/0.log" Mar 08 22:08:39.129412 master-0 kubenswrapper[7480]: I0308 22:08:39.129337 7480 generic.go:334] "Generic (PLEG): container finished" podID="dfe625a1-5ba4-491f-9ab3-5d91154961a0" containerID="6c17da4a9a78c97b020ed2b0ce3db78d69c06f2bc4329c8df6a1559c497aade3" exitCode=1 Mar 08 22:08:39.129509 master-0 kubenswrapper[7480]: I0308 22:08:39.129410 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-trhtl" event={"ID":"dfe625a1-5ba4-491f-9ab3-5d91154961a0","Type":"ContainerDied","Data":"6c17da4a9a78c97b020ed2b0ce3db78d69c06f2bc4329c8df6a1559c497aade3"} Mar 08 22:08:39.129593 master-0 kubenswrapper[7480]: I0308 22:08:39.129513 7480 scope.go:117] "RemoveContainer" 
containerID="73a8f9d32fb6d4973561166a1225ead4683b3110d97d82f0bed60b3b5a68361b" Mar 08 22:08:39.130637 master-0 kubenswrapper[7480]: I0308 22:08:39.130580 7480 scope.go:117] "RemoveContainer" containerID="6c17da4a9a78c97b020ed2b0ce3db78d69c06f2bc4329c8df6a1559c497aade3" Mar 08 22:08:39.131031 master-0 kubenswrapper[7480]: E0308 22:08:39.130924 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"approver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=approver pod=network-node-identity-trhtl_openshift-network-node-identity(dfe625a1-5ba4-491f-9ab3-5d91154961a0)\"" pod="openshift-network-node-identity/network-node-identity-trhtl" podUID="dfe625a1-5ba4-491f-9ab3-5d91154961a0" Mar 08 22:08:39.504522 master-0 kubenswrapper[7480]: I0308 22:08:39.504302 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:08:39.504522 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:08:39.504522 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:08:39.504522 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:08:39.504522 master-0 kubenswrapper[7480]: I0308 22:08:39.504410 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:08:40.141207 master-0 kubenswrapper[7480]: I0308 22:08:40.141117 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-trhtl_dfe625a1-5ba4-491f-9ab3-5d91154961a0/approver/1.log" Mar 08 22:08:40.503779 master-0 kubenswrapper[7480]: I0308 22:08:40.503558 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:08:40.503779 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:08:40.503779 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:08:40.503779 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:08:40.503779 master-0 kubenswrapper[7480]: I0308 22:08:40.503694 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:08:41.504385 master-0 kubenswrapper[7480]: I0308 22:08:41.504287 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:08:41.504385 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:08:41.504385 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:08:41.504385 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:08:41.504385 master-0 kubenswrapper[7480]: I0308 22:08:41.504382 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" 
podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:08:42.332673 master-0 kubenswrapper[7480]: E0308 22:08:42.332591 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 22:08:42.333023 master-0 kubenswrapper[7480]: E0308 22:08:42.333007 7480 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 08 22:08:42.503864 master-0 kubenswrapper[7480]: I0308 22:08:42.503784 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:08:42.503864 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:08:42.503864 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:08:42.503864 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:08:42.504484 master-0 kubenswrapper[7480]: I0308 22:08:42.503899 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:08:43.504677 master-0 kubenswrapper[7480]: I0308 22:08:43.504610 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:08:43.504677 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:08:43.504677 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:08:43.504677 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:08:43.505838 master-0 kubenswrapper[7480]: I0308 22:08:43.505415 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:08:44.504885 master-0 kubenswrapper[7480]: I0308 22:08:44.504763 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:08:44.504885 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:08:44.504885 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:08:44.504885 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:08:44.506041 master-0 kubenswrapper[7480]: I0308 22:08:44.504925 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:08:45.504735 master-0 kubenswrapper[7480]: I0308 22:08:45.504632 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:08:45.504735 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:08:45.504735 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:08:45.504735 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:08:45.505695 master-0 kubenswrapper[7480]: I0308 22:08:45.504753 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:08:46.503852 master-0 kubenswrapper[7480]: I0308 22:08:46.503772 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:08:46.503852 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:08:46.503852 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:08:46.503852 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:08:46.504284 master-0 kubenswrapper[7480]: I0308 22:08:46.503865 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:08:47.503702 master-0 kubenswrapper[7480]: I0308 22:08:47.503558 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:08:47.503702 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:08:47.503702 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:08:47.503702 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:08:47.505210 master-0 kubenswrapper[7480]: I0308 22:08:47.503713 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:08:48.503222 master-0 kubenswrapper[7480]: I0308 22:08:48.503131 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:08:48.503222 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:08:48.503222 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:08:48.503222 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:08:48.503624 master-0 kubenswrapper[7480]: I0308 22:08:48.503222 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:08:48.858085 master-0 kubenswrapper[7480]: E0308 22:08:48.857967 7480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Mar 08 22:08:49.505039 master-0 kubenswrapper[7480]: I0308 22:08:49.504929 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:08:49.505039 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:08:49.505039 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:08:49.505039 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:08:49.505440 master-0 kubenswrapper[7480]: I0308 22:08:49.505098 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:08:50.239827 master-0 kubenswrapper[7480]: E0308 22:08:50.239576 7480 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{cni-sysctl-allowlist-ds-xlrwk.189afd12478436e3 openshift-multus 11896 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-multus,Name:cni-sysctl-allowlist-ds-xlrwk,UID:7147d808-f9a2-434c-ae54-77d82a3d2e1f,APIVersion:v1,ResourceVersion:11672,FieldPath:spec.containers{kube-multus-additional-cni-plugins},},Reason:Unhealthy,Message:Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 22:07:38 +0000 UTC,LastTimestamp:2026-03-08 22:07:48.272388472 +0000 UTC m=+618.726009074,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 22:08:50.504125 master-0 kubenswrapper[7480]: I0308 22:08:50.503875 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:08:50.504125 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:08:50.504125 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:08:50.504125 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:08:50.504125 master-0 kubenswrapper[7480]: I0308 22:08:50.503998 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:08:50.782138 master-0 kubenswrapper[7480]: I0308 22:08:50.781910 7480 scope.go:117] "RemoveContainer" containerID="6c17da4a9a78c97b020ed2b0ce3db78d69c06f2bc4329c8df6a1559c497aade3" Mar 08 22:08:51.237820 master-0 kubenswrapper[7480]: I0308 22:08:51.237711 7480 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-network-node-identity_network-node-identity-trhtl_dfe625a1-5ba4-491f-9ab3-5d91154961a0/approver/1.log" Mar 08 22:08:51.238709 master-0 kubenswrapper[7480]: I0308 22:08:51.238616 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-trhtl" event={"ID":"dfe625a1-5ba4-491f-9ab3-5d91154961a0","Type":"ContainerStarted","Data":"e78de91412bc1e77f8bd1aa7528f80d543f00633d1f8f9abc82a7124a38b7306"} Mar 08 22:08:51.503933 master-0 kubenswrapper[7480]: I0308 22:08:51.503836 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:08:51.503933 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:08:51.503933 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:08:51.503933 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:08:51.505306 master-0 kubenswrapper[7480]: I0308 22:08:51.503947 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:08:52.252063 master-0 kubenswrapper[7480]: I0308 22:08:52.251956 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-cjdgr_84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/ingress-operator/3.log" Mar 08 22:08:52.253413 master-0 kubenswrapper[7480]: I0308 22:08:52.253285 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-cjdgr_84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/ingress-operator/2.log" Mar 08 22:08:52.254359 master-0 kubenswrapper[7480]: I0308 22:08:52.254171 7480 generic.go:334] "Generic (PLEG): container finished" podID="84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed" containerID="11be5746bd3e725240b9d330f64ada9a50979ab4691f07ea934a8eda8d86e8b5" exitCode=1 Mar 08 22:08:52.254359 master-0 kubenswrapper[7480]: I0308 22:08:52.254248 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" event={"ID":"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed","Type":"ContainerDied","Data":"11be5746bd3e725240b9d330f64ada9a50979ab4691f07ea934a8eda8d86e8b5"} Mar 08 22:08:52.254359 master-0 kubenswrapper[7480]: I0308 22:08:52.254325 7480 scope.go:117] "RemoveContainer" containerID="1f4a62d722d99fc6a3743dcd20f8ccf06ee8ac82957a3628d0186bea1711ac1c" Mar 08 22:08:52.256138 master-0 kubenswrapper[7480]: I0308 22:08:52.256006 7480 scope.go:117] "RemoveContainer" containerID="11be5746bd3e725240b9d330f64ada9a50979ab4691f07ea934a8eda8d86e8b5" Mar 08 22:08:52.257437 master-0 kubenswrapper[7480]: E0308 22:08:52.257189 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-cjdgr_openshift-ingress-operator(84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" podUID="84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed" Mar 08 22:08:52.503973 master-0 kubenswrapper[7480]: I0308 22:08:52.503757 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router 
Mar 08 22:08:52.503973 master-0 kubenswrapper[7480]: I0308 22:08:52.503757 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:08:52.503973 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:08:52.503973 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:08:52.503973 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:08:52.503973 master-0 kubenswrapper[7480]: I0308 22:08:52.503866 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:08:52.587857 master-0 kubenswrapper[7480]: I0308 22:08:52.587765 7480 status_manager.go:851] "Failed to get status for pod" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods router-default-79f8cd6fdd-4fsdl)"
Mar 08 22:08:53.267807 master-0 kubenswrapper[7480]: I0308 22:08:53.267727 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-cjdgr_84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/ingress-operator/3.log"
Mar 08 22:08:53.504848 master-0 kubenswrapper[7480]: I0308 22:08:53.504732 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:08:53.504848 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:08:53.504848 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:08:53.504848 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:08:53.506027 master-0 kubenswrapper[7480]: I0308 22:08:53.504859 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:08:54.503064 master-0 kubenswrapper[7480]: I0308 22:08:54.502982 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:08:54.503064 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:08:54.503064 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:08:54.503064 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:08:54.503549 master-0 kubenswrapper[7480]: I0308 22:08:54.503110 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:08:55.504745 master-0 kubenswrapper[7480]: I0308 22:08:55.504606 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:08:55.504745 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:08:55.504745 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:08:55.504745 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:08:55.505861 master-0 kubenswrapper[7480]: I0308 22:08:55.504742 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:08:55.819631 master-0 kubenswrapper[7480]: E0308 22:08:55.819512 7480 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 08 22:08:55.820421 master-0 kubenswrapper[7480]: I0308 22:08:55.820375 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 08 22:08:55.851550 master-0 kubenswrapper[7480]: W0308 22:08:55.851403 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29c709c82970b529e7b9b895aa92ef05.slice/crio-7e4394146a2df2b894fc7124d9eec1bf24b8531e0bd0dd7d435898a00dec36d0 WatchSource:0}: Error finding container 7e4394146a2df2b894fc7124d9eec1bf24b8531e0bd0dd7d435898a00dec36d0: Status 404 returned error can't find the container with id 7e4394146a2df2b894fc7124d9eec1bf24b8531e0bd0dd7d435898a00dec36d0
Mar 08 22:08:56.301554 master-0 kubenswrapper[7480]: I0308 22:08:56.301487 7480 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="9b3f703e2b5dc4f53836c052b0708a079abf7ba89e449465ae68fb01236cf52d" exitCode=0
Mar 08 22:08:56.301836 master-0 kubenswrapper[7480]: I0308 22:08:56.301777 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"9b3f703e2b5dc4f53836c052b0708a079abf7ba89e449465ae68fb01236cf52d"}
Mar 08 22:08:56.302022 master-0 kubenswrapper[7480]: I0308 22:08:56.301994 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"7e4394146a2df2b894fc7124d9eec1bf24b8531e0bd0dd7d435898a00dec36d0"}
Mar 08 22:08:56.302730 master-0 kubenswrapper[7480]: I0308 22:08:56.302638 7480 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="5b9fb57f-3c02-459c-97cf-261a396cc93f"
Mar 08 22:08:56.302730 master-0 kubenswrapper[7480]: I0308 22:08:56.302671 7480 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="5b9fb57f-3c02-459c-97cf-261a396cc93f"
Mar 08 22:08:56.504280 master-0 kubenswrapper[7480]: I0308 22:08:56.504109 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:08:56.504280 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:08:56.504280 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:08:56.504280 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:08:56.504280 master-0 kubenswrapper[7480]: I0308 22:08:56.504176 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:08:57.504470 master-0 kubenswrapper[7480]: I0308 22:08:57.504386 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:08:57.504470 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:08:57.504470 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:08:57.504470 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:08:57.505626 master-0 kubenswrapper[7480]: I0308 22:08:57.504491 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:08:58.504572 master-0 kubenswrapper[7480]: I0308 22:08:58.504456 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:08:58.504572 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:08:58.504572 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:08:58.504572 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:08:58.505703 master-0 kubenswrapper[7480]: I0308 22:08:58.504574 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:08:59.058830 master-0 kubenswrapper[7480]: E0308 22:08:59.058538 7480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
Mar 08 22:08:59.504005 master-0 kubenswrapper[7480]: I0308 22:08:59.503910 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:08:59.504005 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:08:59.504005 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:08:59.504005 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:08:59.504580 master-0 kubenswrapper[7480]: I0308 22:08:59.504109 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:09:00.505100 master-0 kubenswrapper[7480]: I0308 22:09:00.504983 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:09:00.505100 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:09:00.505100 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:09:00.505100 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:09:00.505952 master-0 kubenswrapper[7480]: I0308 22:09:00.505137 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:09:01.502564 master-0 kubenswrapper[7480]: I0308 22:09:01.502459 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:09:01.502564 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:09:01.502564 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:09:01.502564 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:09:01.502564 master-0 kubenswrapper[7480]: I0308 22:09:01.502542 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:09:02.504991 master-0 kubenswrapper[7480]: I0308 22:09:02.504805 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:09:02.504991 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:09:02.504991 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:09:02.504991 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:09:02.504991 master-0 kubenswrapper[7480]: I0308 22:09:02.504948 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
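
The "Failed to ensure lease exists, will retry" records show the retry interval doubling while every API call dies with Client.Timeout: 200ms at 22:08:48, 400ms at 22:08:59, then 800ms, 1.6s, and 3.2s further down. A sketch reproducing that doubling, assuming a plain exponential policy inferred from the intervals printed here:

from datetime import timedelta

def lease_retry_intervals(start_ms=200, attempts=5):
    # Doubling retry interval, matching the values printed in the records above.
    interval = timedelta(milliseconds=start_ms)
    for _ in range(attempts):
        yield interval
        interval *= 2

print([f"{i.total_seconds():g}s" for i in lease_retry_intervals()])
# ['0.2s', '0.4s', '0.8s', '1.6s', '3.2s']
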
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:08:52Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:08:52Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:08:52Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:08:52Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:ae042a5d32eb2f18d537f2068849e665b55df7d8360daedaaeea98bd2a79e769\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:d077bbabe6cb885ed229119008480493e8364e4bfddaa00b099f68c52b016e6b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1733328350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:063b8972231e65eb43f6545ba37804f68138dc54d97b91a652a1c5bc7dc76aa5\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:cf682d23b2857e455609879a0867d171a221c18e2cec995dd79570b77c5a4705\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1272201949},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:e0c034ae18daa01af8d073f8cc24ae4af87883c664304910eab1167fdfd60c0b\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:ef0c6b9e405f7a452211e063ce07ded04ccbe38b53860bfd71b5a7cd5072830a\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1229556414},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:82ad8d62d92a8cc5e2391e3b0746219bd740cc26741bc7571010d337240fa112\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:ec87cd8fce2d3b4e2b15f9abaea232c03ff5a6dd46002ea24418a21973abf220\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1220167895},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8\\\"],\\\"sizeBytes\\\":880378279},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813c
b09825b5456be91b4cdd4d02f470f22a33de42c753f2b7\\\"],\\\"sizeBytes\\\":862197440},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce\\\"],\\\"sizeBytes\\\":557426734},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5\\\"],\\\"sizeBytes\\\":513581866},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821\\\"],\\\"sizeBytes\\\":504658657},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9b2e765b795c30c910c331c85226e5db0d56463b6c81d79ded739cba76e2b032\\\"],\\\"sizeBytes\\\":487151732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efed4867528a19e3de56447aa00fe53a6d97b74a207e9adb57f06c62dcc8944e\\\"],\\\"sizeBytes\\\":480534195},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:243ce0f08a360370edf4960aec94fc6c5be9d4aae26cf8c5320adcd047c1b14f\\\"],\\\"sizeBytes\\\":471430788}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 22:09:02.781503 master-0 kubenswrapper[7480]: I0308 22:09:02.781284 7480 scope.go:117] "RemoveContainer" containerID="11be5746bd3e725240b9d330f64ada9a50979ab4691f07ea934a8eda8d86e8b5" Mar 08 22:09:02.781824 master-0 kubenswrapper[7480]: E0308 22:09:02.781765 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed 
Mar 08 22:09:02.781824 master-0 kubenswrapper[7480]: E0308 22:09:02.781765 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-cjdgr_openshift-ingress-operator(84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" podUID="84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed"
Mar 08 22:09:03.504461 master-0 kubenswrapper[7480]: I0308 22:09:03.504376 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:09:03.504461 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:09:03.504461 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:09:03.504461 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:09:03.505162 master-0 kubenswrapper[7480]: I0308 22:09:03.504485 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:09:04.504925 master-0 kubenswrapper[7480]: I0308 22:09:04.504746 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:09:04.504925 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:09:04.504925 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:09:04.504925 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:09:04.504925 master-0 kubenswrapper[7480]: I0308 22:09:04.504856 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:09:05.504046 master-0 kubenswrapper[7480]: I0308 22:09:05.503878 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:09:05.504046 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:09:05.504046 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:09:05.504046 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:09:05.504629 master-0 kubenswrapper[7480]: I0308 22:09:05.504053 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:09:06.503966 master-0 kubenswrapper[7480]: I0308 22:09:06.503843 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:09:06.503966 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:09:06.503966 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:09:06.503966 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:09:06.505169 master-0 kubenswrapper[7480]: I0308 22:09:06.504008 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:09:07.504968 master-0 kubenswrapper[7480]: I0308 22:09:07.504892 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:09:07.504968 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:09:07.504968 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:09:07.504968 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:09:07.506165 master-0 kubenswrapper[7480]: I0308 22:09:07.506115 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:09:08.504802 master-0 kubenswrapper[7480]: I0308 22:09:08.504627 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:09:08.504802 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:09:08.504802 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:09:08.504802 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:09:08.505776 master-0 kubenswrapper[7480]: I0308 22:09:08.504826 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:09:09.460412 master-0 kubenswrapper[7480]: E0308 22:09:09.460333 7480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
Mar 08 22:09:09.503551 master-0 kubenswrapper[7480]: I0308 22:09:09.503455 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:09:09.503551 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:09:09.503551 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:09:09.503551 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:09:09.503894 master-0 kubenswrapper[7480]: I0308 22:09:09.503584 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:09:10.504737 master-0 kubenswrapper[7480]: I0308 22:09:10.504610 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:09:10.504737 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:09:10.504737 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:09:10.504737 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:09:10.506201 master-0 kubenswrapper[7480]: I0308 22:09:10.504777 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:09:11.503394 master-0 kubenswrapper[7480]: I0308 22:09:11.503208 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:09:11.503394 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:09:11.503394 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:09:11.503394 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:09:11.503808 master-0 kubenswrapper[7480]: I0308 22:09:11.503416 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:09:12.504555 master-0 kubenswrapper[7480]: I0308 22:09:12.504487 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:09:12.504555 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:09:12.504555 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:09:12.504555 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:09:12.506132 master-0 kubenswrapper[7480]: I0308 22:09:12.506056 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:09:12.688039 master-0 kubenswrapper[7480]: E0308 22:09:12.687930 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 22:09:13.504143 master-0 kubenswrapper[7480]: I0308 22:09:13.503972 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:09:13.504143 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:09:13.504143 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:09:13.504143 master-0 kubenswrapper[7480]: healthz check failed
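
Each startup-probe failure includes the start of the router's healthz response body, where [+] marks a passing check and [-] a failing one (backend-http and has-synced fail, process-running passes). A small sketch parsing that format, using the body text shown in these records:

body = """[-]backend-http failed: reason withheld
[-]has-synced failed: reason withheld
[+]process-running ok
healthz check failed"""

checks = {}
for line in body.splitlines():
    if line.startswith(("[+]", "[-]")):
        name = line[3:].split()[0]
        checks[name] = line.startswith("[+]")

print(checks)  # {'backend-http': False, 'has-synced': False, 'process-running': True}
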
podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:09:14.504377 master-0 kubenswrapper[7480]: I0308 22:09:14.504255 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:09:14.504377 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:09:14.504377 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:09:14.504377 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:09:14.504377 master-0 kubenswrapper[7480]: I0308 22:09:14.504360 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:09:15.505066 master-0 kubenswrapper[7480]: I0308 22:09:15.504829 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:09:15.505066 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:09:15.505066 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:09:15.505066 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:09:15.505066 master-0 kubenswrapper[7480]: I0308 22:09:15.504963 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:09:16.503636 master-0 kubenswrapper[7480]: I0308 22:09:16.503504 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:09:16.503636 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:09:16.503636 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:09:16.503636 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:09:16.504112 master-0 kubenswrapper[7480]: I0308 22:09:16.503662 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:09:17.505048 master-0 kubenswrapper[7480]: I0308 22:09:17.504978 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:09:17.505048 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:09:17.505048 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:09:17.505048 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:09:17.506157 master-0 kubenswrapper[7480]: I0308 22:09:17.505062 7480 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:09:17.782297 master-0 kubenswrapper[7480]: I0308 22:09:17.782040 7480 scope.go:117] "RemoveContainer" containerID="11be5746bd3e725240b9d330f64ada9a50979ab4691f07ea934a8eda8d86e8b5" Mar 08 22:09:17.782762 master-0 kubenswrapper[7480]: E0308 22:09:17.782685 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-cjdgr_openshift-ingress-operator(84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" podUID="84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed" Mar 08 22:09:18.504009 master-0 kubenswrapper[7480]: I0308 22:09:18.503887 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:09:18.504009 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:09:18.504009 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:09:18.504009 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:09:18.504555 master-0 kubenswrapper[7480]: I0308 22:09:18.504054 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:09:19.505342 master-0 kubenswrapper[7480]: I0308 22:09:19.505213 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:09:19.505342 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:09:19.505342 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:09:19.505342 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:09:19.507043 master-0 kubenswrapper[7480]: I0308 22:09:19.505359 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:09:20.263063 master-0 kubenswrapper[7480]: E0308 22:09:20.262952 7480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Mar 08 22:09:20.507008 master-0 kubenswrapper[7480]: I0308 22:09:20.506708 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:09:20.507008 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:09:20.507008 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:09:20.507008 master-0 
kubenswrapper[7480]: healthz check failed Mar 08 22:09:20.507008 master-0 kubenswrapper[7480]: I0308 22:09:20.506875 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:09:21.504009 master-0 kubenswrapper[7480]: I0308 22:09:21.503930 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:09:21.504009 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:09:21.504009 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:09:21.504009 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:09:21.504641 master-0 kubenswrapper[7480]: I0308 22:09:21.504012 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:09:22.504480 master-0 kubenswrapper[7480]: I0308 22:09:22.504345 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:09:22.504480 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:09:22.504480 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:09:22.504480 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:09:22.505609 master-0 kubenswrapper[7480]: I0308 22:09:22.504579 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:09:22.689029 master-0 kubenswrapper[7480]: E0308 22:09:22.688911 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 22:09:23.504732 master-0 kubenswrapper[7480]: I0308 22:09:23.504554 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:09:23.504732 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:09:23.504732 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:09:23.504732 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:09:23.504732 master-0 kubenswrapper[7480]: I0308 22:09:23.504681 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:09:24.243704 master-0 kubenswrapper[7480]: E0308 22:09:24.243412 7480 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not 
Mar 08 22:09:24.243704 master-0 kubenswrapper[7480]: E0308 22:09:24.243412 7480 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{router-default-79f8cd6fdd-4fsdl.189afcee818c3411 openshift-ingress 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ingress,Name:router-default-79f8cd6fdd-4fsdl,UID:81f5ed55-225c-41e2-bc9d-b41063a604c9,APIVersion:v1,ResourceVersion:10172,FieldPath:spec.containers{router},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9b2e765b795c30c910c331c85226e5db0d56463b6c81d79ded739cba76e2b032\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 22:05:04.629576721 +0000 UTC m=+455.083197363,LastTimestamp:2026-03-08 22:07:51.642558588 +0000 UTC m=+622.096179220,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 08 22:09:24.504979 master-0 kubenswrapper[7480]: I0308 22:09:24.504654 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:09:24.504979 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:09:24.504979 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:09:24.504979 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:09:24.504979 master-0 kubenswrapper[7480]: I0308 22:09:24.504784 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:09:25.504690 master-0 kubenswrapper[7480]: I0308 22:09:25.504575 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:09:25.504690 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:09:25.504690 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:09:25.504690 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:09:25.505971 master-0 kubenswrapper[7480]: I0308 22:09:25.504720 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:09:26.504108 master-0 kubenswrapper[7480]: I0308 22:09:26.503973 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:09:26.504108 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:09:26.504108 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:09:26.504108 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:09:26.504780 master-0 kubenswrapper[7480]: I0308 22:09:26.504131 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:09:27.503784 master-0 kubenswrapper[7480]: I0308 22:09:27.503671 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:09:27.503784 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:09:27.503784 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:09:27.503784 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:09:27.504881 master-0 kubenswrapper[7480]: I0308 22:09:27.503792 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:09:28.504011 master-0 kubenswrapper[7480]: I0308 22:09:28.503929 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:09:28.504011 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:09:28.504011 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:09:28.504011 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:09:28.505100 master-0 kubenswrapper[7480]: I0308 22:09:28.504019 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:09:28.788820 master-0 kubenswrapper[7480]: I0308 22:09:28.788618 7480 scope.go:117] "RemoveContainer" containerID="11be5746bd3e725240b9d330f64ada9a50979ab4691f07ea934a8eda8d86e8b5"
Mar 08 22:09:28.789194 master-0 kubenswrapper[7480]: E0308 22:09:28.789120 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-cjdgr_openshift-ingress-operator(84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" podUID="84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed"
Mar 08 22:09:29.504061 master-0 kubenswrapper[7480]: I0308 22:09:29.503992 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:09:29.504061 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:09:29.504061 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:09:29.504061 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:09:29.504770 master-0 kubenswrapper[7480]: I0308 22:09:29.504111 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:09:30.306648 master-0 kubenswrapper[7480]: E0308 22:09:30.306468 7480 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 08 22:09:30.504855 master-0 kubenswrapper[7480]: I0308 22:09:30.504753 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:09:30.504855 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:09:30.504855 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:09:30.504855 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:09:30.515957 master-0 kubenswrapper[7480]: I0308 22:09:30.504859 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:09:30.591100 master-0 kubenswrapper[7480]: I0308 22:09:30.591009 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh_d063b330-4180-43de-a248-c573183d96f1/config-sync-controllers/0.log"
Mar 08 22:09:30.592043 master-0 kubenswrapper[7480]: I0308 22:09:30.591983 7480 generic.go:334] "Generic (PLEG): container finished" podID="d063b330-4180-43de-a248-c573183d96f1" containerID="f35f20071c5b0df4134c3bd22227a8034ca2417ef7250451b3ec29b800fa74dc" exitCode=1
Mar 08 22:09:30.592351 master-0 kubenswrapper[7480]: I0308 22:09:30.592117 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" event={"ID":"d063b330-4180-43de-a248-c573183d96f1","Type":"ContainerDied","Data":"f35f20071c5b0df4134c3bd22227a8034ca2417ef7250451b3ec29b800fa74dc"}
Mar 08 22:09:30.593445 master-0 kubenswrapper[7480]: I0308 22:09:30.593410 7480 scope.go:117] "RemoveContainer" containerID="f35f20071c5b0df4134c3bd22227a8034ca2417ef7250451b3ec29b800fa74dc"
Mar 08 22:09:31.502699 master-0 kubenswrapper[7480]: I0308 22:09:31.502607 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:09:31.502699 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:09:31.502699 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:09:31.502699 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:09:31.503233 master-0 kubenswrapper[7480]: I0308 22:09:31.502705 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:09:31.602105 master-0 kubenswrapper[7480]: I0308 22:09:31.601997 7480 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="048081af0d4f2d7c89ebdb9c25d0b6b144830ec123396e7ecad6567e008c8334" exitCode=0
Mar 08 22:09:31.602105 master-0 kubenswrapper[7480]: I0308 22:09:31.602057 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"048081af0d4f2d7c89ebdb9c25d0b6b144830ec123396e7ecad6567e008c8334"}
Mar 08 22:09:31.602991 master-0 kubenswrapper[7480]: I0308 22:09:31.602562 7480 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="5b9fb57f-3c02-459c-97cf-261a396cc93f"
Mar 08 22:09:31.602991 master-0 kubenswrapper[7480]: I0308 22:09:31.602599 7480 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="5b9fb57f-3c02-459c-97cf-261a396cc93f"
Mar 08 22:09:31.605986 master-0 kubenswrapper[7480]: I0308 22:09:31.605931 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh_d063b330-4180-43de-a248-c573183d96f1/config-sync-controllers/0.log"
Mar 08 22:09:31.606541 master-0 kubenswrapper[7480]: I0308 22:09:31.606502 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" event={"ID":"d063b330-4180-43de-a248-c573183d96f1","Type":"ContainerStarted","Data":"937b674da18ffd00f3060b7c8bedea19980a79bcc897766e82761f716314d591"}
Mar 08 22:09:31.864789 master-0 kubenswrapper[7480]: E0308 22:09:31.864674 7480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
Mar 08 22:09:32.504281 master-0 kubenswrapper[7480]: I0308 22:09:32.504134 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:09:32.504281 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:09:32.504281 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:09:32.504281 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:09:32.505051 master-0 kubenswrapper[7480]: I0308 22:09:32.504312 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:09:32.690504 master-0 kubenswrapper[7480]: E0308 22:09:32.690140 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 22:09:33.504697 master-0 kubenswrapper[7480]: I0308 22:09:33.504588 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:09:33.504697 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:09:33.504697 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:09:33.504697 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:09:33.504697 master-0 kubenswrapper[7480]: I0308 22:09:33.504685 7480 prober.go:107] "Probe failed"
probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:09:33.626189 master-0 kubenswrapper[7480]: I0308 22:09:33.625839 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-wklhr_c901b468-b8e9-48f8-8050-0d54e24e2adb/snapshot-controller/1.log" Mar 08 22:09:33.626734 master-0 kubenswrapper[7480]: I0308 22:09:33.626649 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-wklhr_c901b468-b8e9-48f8-8050-0d54e24e2adb/snapshot-controller/0.log" Mar 08 22:09:33.626734 master-0 kubenswrapper[7480]: I0308 22:09:33.626686 7480 generic.go:334] "Generic (PLEG): container finished" podID="c901b468-b8e9-48f8-8050-0d54e24e2adb" containerID="2bcf2f4522ec1e98454f0d3a88ae01a27705138b2f5fbbd08bc581f106c16a5d" exitCode=1 Mar 08 22:09:33.626734 master-0 kubenswrapper[7480]: I0308 22:09:33.626716 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" event={"ID":"c901b468-b8e9-48f8-8050-0d54e24e2adb","Type":"ContainerDied","Data":"2bcf2f4522ec1e98454f0d3a88ae01a27705138b2f5fbbd08bc581f106c16a5d"} Mar 08 22:09:33.627005 master-0 kubenswrapper[7480]: I0308 22:09:33.626767 7480 scope.go:117] "RemoveContainer" containerID="975d86808356450f32e152ee3c49e6ab2d8f04281755488f22f0b7506389bb2d" Mar 08 22:09:33.627321 master-0 kubenswrapper[7480]: I0308 22:09:33.627257 7480 scope.go:117] "RemoveContainer" containerID="2bcf2f4522ec1e98454f0d3a88ae01a27705138b2f5fbbd08bc581f106c16a5d" Mar 08 22:09:33.627639 master-0 kubenswrapper[7480]: E0308 22:09:33.627571 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-wklhr_openshift-cluster-storage-operator(c901b468-b8e9-48f8-8050-0d54e24e2adb)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" podUID="c901b468-b8e9-48f8-8050-0d54e24e2adb" Mar 08 22:09:34.503840 master-0 kubenswrapper[7480]: I0308 22:09:34.503769 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:09:34.503840 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:09:34.503840 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:09:34.503840 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:09:34.504913 master-0 kubenswrapper[7480]: I0308 22:09:34.503862 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:09:34.639038 master-0 kubenswrapper[7480]: I0308 22:09:34.638772 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-wklhr_c901b468-b8e9-48f8-8050-0d54e24e2adb/snapshot-controller/1.log" Mar 08 22:09:35.503489 master-0 kubenswrapper[7480]: I0308 22:09:35.503415 7480 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:09:35.503489 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:09:35.503489 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:09:35.503489 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:09:35.504594 master-0 kubenswrapper[7480]: I0308 22:09:35.503520 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:09:36.504510 master-0 kubenswrapper[7480]: I0308 22:09:36.504424 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:09:36.504510 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:09:36.504510 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:09:36.504510 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:09:36.505345 master-0 kubenswrapper[7480]: I0308 22:09:36.504541 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:09:37.504330 master-0 kubenswrapper[7480]: I0308 22:09:37.504221 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:09:37.504330 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:09:37.504330 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:09:37.504330 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:09:37.505429 master-0 kubenswrapper[7480]: I0308 22:09:37.504358 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:09:38.503630 master-0 kubenswrapper[7480]: I0308 22:09:38.503545 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:09:38.503630 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:09:38.503630 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:09:38.503630 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:09:38.504495 master-0 kubenswrapper[7480]: I0308 22:09:38.503651 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:09:39.504017 master-0 kubenswrapper[7480]: I0308 22:09:39.503936 
7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:09:39.504017 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:09:39.504017 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:09:39.504017 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:09:39.505062 master-0 kubenswrapper[7480]: I0308 22:09:39.504046 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:09:40.504434 master-0 kubenswrapper[7480]: I0308 22:09:40.504338 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:09:40.504434 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:09:40.504434 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:09:40.504434 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:09:40.505034 master-0 kubenswrapper[7480]: I0308 22:09:40.504461 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:09:41.503694 master-0 kubenswrapper[7480]: I0308 22:09:41.503573 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:09:41.503694 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:09:41.503694 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:09:41.503694 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:09:41.503694 master-0 kubenswrapper[7480]: I0308 22:09:41.503694 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:09:41.781727 master-0 kubenswrapper[7480]: I0308 22:09:41.781534 7480 scope.go:117] "RemoveContainer" containerID="11be5746bd3e725240b9d330f64ada9a50979ab4691f07ea934a8eda8d86e8b5" Mar 08 22:09:42.505023 master-0 kubenswrapper[7480]: I0308 22:09:42.504940 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:09:42.505023 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:09:42.505023 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:09:42.505023 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:09:42.505450 master-0 kubenswrapper[7480]: I0308 22:09:42.505042 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" 
podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:09:42.691638 master-0 kubenswrapper[7480]: E0308 22:09:42.691527 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 22:09:42.691638 master-0 kubenswrapper[7480]: E0308 22:09:42.691600 7480 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 08 22:09:42.714554 master-0 kubenswrapper[7480]: I0308 22:09:42.714474 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-cjdgr_84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/ingress-operator/3.log" Mar 08 22:09:42.715407 master-0 kubenswrapper[7480]: I0308 22:09:42.715310 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" event={"ID":"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed","Type":"ContainerStarted","Data":"31335e26ca3ad44282a26b6dd2ea0c331b70b964fab198517349f95631c93a18"} Mar 08 22:09:43.504107 master-0 kubenswrapper[7480]: I0308 22:09:43.503993 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:09:43.504107 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:09:43.504107 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:09:43.504107 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:09:43.505543 master-0 kubenswrapper[7480]: I0308 22:09:43.504171 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:09:44.504825 master-0 kubenswrapper[7480]: I0308 22:09:44.504657 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:09:44.504825 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:09:44.504825 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:09:44.504825 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:09:44.506449 master-0 kubenswrapper[7480]: I0308 22:09:44.506275 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:09:45.067060 master-0 kubenswrapper[7480]: E0308 22:09:45.066925 7480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Mar 08 22:09:45.503837 master-0 kubenswrapper[7480]: I0308 22:09:45.503629 7480 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:09:45.503837 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:09:45.503837 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:09:45.503837 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:09:45.503837 master-0 kubenswrapper[7480]: I0308 22:09:45.503739 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:09:45.781931 master-0 kubenswrapper[7480]: I0308 22:09:45.781528 7480 scope.go:117] "RemoveContainer" containerID="2bcf2f4522ec1e98454f0d3a88ae01a27705138b2f5fbbd08bc581f106c16a5d" Mar 08 22:09:46.504425 master-0 kubenswrapper[7480]: I0308 22:09:46.504289 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:09:46.504425 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:09:46.504425 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:09:46.504425 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:09:46.504425 master-0 kubenswrapper[7480]: I0308 22:09:46.504403 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:09:46.635954 master-0 kubenswrapper[7480]: I0308 22:09:46.635866 7480 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-5ljhh container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" start-of-body= Mar 08 22:09:46.636229 master-0 kubenswrapper[7480]: I0308 22:09:46.635966 7480 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" podUID="7e0267ba-5dd7-4e81-885f-95b27a7b42ea" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" Mar 08 22:09:46.636229 master-0 kubenswrapper[7480]: I0308 22:09:46.636131 7480 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-5ljhh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" start-of-body= Mar 08 22:09:46.636393 master-0 kubenswrapper[7480]: I0308 22:09:46.636240 7480 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" podUID="7e0267ba-5dd7-4e81-885f-95b27a7b42ea" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" Mar 08 22:09:46.764225 master-0 kubenswrapper[7480]: I0308 22:09:46.764014 7480 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-qv4bv_2a91f36f-900e-4b99-9be1-dfc61d8e31d9/manager/1.log" Mar 08 22:09:46.765252 master-0 kubenswrapper[7480]: I0308 22:09:46.765199 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-qv4bv_2a91f36f-900e-4b99-9be1-dfc61d8e31d9/manager/0.log" Mar 08 22:09:46.765896 master-0 kubenswrapper[7480]: I0308 22:09:46.765809 7480 generic.go:334] "Generic (PLEG): container finished" podID="2a91f36f-900e-4b99-9be1-dfc61d8e31d9" containerID="bbaef61fb3881295b80f5476ce40c1eeb152f4f8c17f1203f7df159cc62e41fb" exitCode=1 Mar 08 22:09:46.765896 master-0 kubenswrapper[7480]: I0308 22:09:46.765849 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" event={"ID":"2a91f36f-900e-4b99-9be1-dfc61d8e31d9","Type":"ContainerDied","Data":"bbaef61fb3881295b80f5476ce40c1eeb152f4f8c17f1203f7df159cc62e41fb"} Mar 08 22:09:46.766170 master-0 kubenswrapper[7480]: I0308 22:09:46.765921 7480 scope.go:117] "RemoveContainer" containerID="69b4132a818df716de03fdd12ebf683c551197394c831d762cb2338396e793c4" Mar 08 22:09:46.767290 master-0 kubenswrapper[7480]: I0308 22:09:46.767220 7480 scope.go:117] "RemoveContainer" containerID="bbaef61fb3881295b80f5476ce40c1eeb152f4f8c17f1203f7df159cc62e41fb" Mar 08 22:09:46.767667 master-0 kubenswrapper[7480]: E0308 22:09:46.767604 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=catalogd-controller-manager-7f8b8b6f4c-qv4bv_openshift-catalogd(2a91f36f-900e-4b99-9be1-dfc61d8e31d9)\"" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" podUID="2a91f36f-900e-4b99-9be1-dfc61d8e31d9" Mar 08 22:09:46.770152 master-0 kubenswrapper[7480]: I0308 22:09:46.770058 7480 generic.go:334] "Generic (PLEG): container finished" podID="7e0267ba-5dd7-4e81-885f-95b27a7b42ea" containerID="852d729d09be57b6d61037e6fcf22117d96dfe2b5817fac91c49139db7eb714e" exitCode=0 Mar 08 22:09:46.770250 master-0 kubenswrapper[7480]: I0308 22:09:46.770203 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" event={"ID":"7e0267ba-5dd7-4e81-885f-95b27a7b42ea","Type":"ContainerDied","Data":"852d729d09be57b6d61037e6fcf22117d96dfe2b5817fac91c49139db7eb714e"} Mar 08 22:09:46.771052 master-0 kubenswrapper[7480]: I0308 22:09:46.770993 7480 scope.go:117] "RemoveContainer" containerID="852d729d09be57b6d61037e6fcf22117d96dfe2b5817fac91c49139db7eb714e" Mar 08 22:09:46.771555 master-0 kubenswrapper[7480]: E0308 22:09:46.771480 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-64bf9778cb-5ljhh_openshift-marketplace(7e0267ba-5dd7-4e81-885f-95b27a7b42ea)\"" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" podUID="7e0267ba-5dd7-4e81-885f-95b27a7b42ea" Mar 08 22:09:46.774471 master-0 kubenswrapper[7480]: I0308 22:09:46.774418 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-wklhr_c901b468-b8e9-48f8-8050-0d54e24e2adb/snapshot-controller/1.log" Mar 08 22:09:46.774605 master-0 kubenswrapper[7480]: I0308 22:09:46.774494 7480 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" event={"ID":"c901b468-b8e9-48f8-8050-0d54e24e2adb","Type":"ContainerStarted","Data":"f2f2c209c9dc032368727672b5a81d06749c220790791a47ead234478e14b109"} Mar 08 22:09:46.796919 master-0 kubenswrapper[7480]: I0308 22:09:46.796817 7480 scope.go:117] "RemoveContainer" containerID="c7c62eecaac8f5df8b2da98122fad8c96cfc54251fbf2aa75a9ba067018db826" Mar 08 22:09:47.503850 master-0 kubenswrapper[7480]: I0308 22:09:47.503694 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:09:47.503850 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:09:47.503850 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:09:47.503850 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:09:47.504545 master-0 kubenswrapper[7480]: I0308 22:09:47.504488 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:09:47.786763 master-0 kubenswrapper[7480]: I0308 22:09:47.786604 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-qv4bv_2a91f36f-900e-4b99-9be1-dfc61d8e31d9/manager/1.log" Mar 08 22:09:47.791504 master-0 kubenswrapper[7480]: I0308 22:09:47.791423 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh_d063b330-4180-43de-a248-c573183d96f1/config-sync-controllers/0.log" Mar 08 22:09:47.792563 master-0 kubenswrapper[7480]: I0308 22:09:47.792502 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh_d063b330-4180-43de-a248-c573183d96f1/cluster-cloud-controller-manager/0.log" Mar 08 22:09:47.792729 master-0 kubenswrapper[7480]: I0308 22:09:47.792595 7480 generic.go:334] "Generic (PLEG): container finished" podID="d063b330-4180-43de-a248-c573183d96f1" containerID="6db16eaa3133d25587d14c0b9e526e3d55af3b3bbd2fa785bac1c1b404fb50fd" exitCode=1 Mar 08 22:09:47.794400 master-0 kubenswrapper[7480]: I0308 22:09:47.794316 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" event={"ID":"d063b330-4180-43de-a248-c573183d96f1","Type":"ContainerDied","Data":"6db16eaa3133d25587d14c0b9e526e3d55af3b3bbd2fa785bac1c1b404fb50fd"} Mar 08 22:09:47.795242 master-0 kubenswrapper[7480]: I0308 22:09:47.795203 7480 scope.go:117] "RemoveContainer" containerID="6db16eaa3133d25587d14c0b9e526e3d55af3b3bbd2fa785bac1c1b404fb50fd" Mar 08 22:09:48.504878 master-0 kubenswrapper[7480]: I0308 22:09:48.504804 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:09:48.504878 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:09:48.504878 master-0 kubenswrapper[7480]: 
[+]process-running ok Mar 08 22:09:48.504878 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:09:48.505991 master-0 kubenswrapper[7480]: I0308 22:09:48.504905 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:09:48.808014 master-0 kubenswrapper[7480]: I0308 22:09:48.807819 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh_d063b330-4180-43de-a248-c573183d96f1/config-sync-controllers/0.log" Mar 08 22:09:48.808406 master-0 kubenswrapper[7480]: I0308 22:09:48.808358 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh_d063b330-4180-43de-a248-c573183d96f1/cluster-cloud-controller-manager/0.log" Mar 08 22:09:48.808508 master-0 kubenswrapper[7480]: I0308 22:09:48.808432 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" event={"ID":"d063b330-4180-43de-a248-c573183d96f1","Type":"ContainerStarted","Data":"7a5964149940bfe02b13e1629eac187329873cf8b67f50fef511754fdef9ba33"} Mar 08 22:09:49.503688 master-0 kubenswrapper[7480]: I0308 22:09:49.503578 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:09:49.503688 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:09:49.503688 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:09:49.503688 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:09:49.504344 master-0 kubenswrapper[7480]: I0308 22:09:49.503701 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:09:50.505104 master-0 kubenswrapper[7480]: I0308 22:09:50.504937 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:09:50.505104 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:09:50.505104 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:09:50.505104 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:09:50.506117 master-0 kubenswrapper[7480]: I0308 22:09:50.505128 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:09:50.827514 master-0 kubenswrapper[7480]: I0308 22:09:50.827402 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-nk294_077643a2-ab2d-4f12-9abf-42a1c56d7aff/manager/1.log" Mar 08 22:09:50.829275 master-0 kubenswrapper[7480]: 
I0308 22:09:50.829229 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-nk294_077643a2-ab2d-4f12-9abf-42a1c56d7aff/manager/0.log" Mar 08 22:09:50.829409 master-0 kubenswrapper[7480]: I0308 22:09:50.829303 7480 generic.go:334] "Generic (PLEG): container finished" podID="077643a2-ab2d-4f12-9abf-42a1c56d7aff" containerID="a4567b8a512f6afc2a33af0577da173a511b7ea0b98b67a3e548c26a0e448321" exitCode=1 Mar 08 22:09:50.829409 master-0 kubenswrapper[7480]: I0308 22:09:50.829352 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" event={"ID":"077643a2-ab2d-4f12-9abf-42a1c56d7aff","Type":"ContainerDied","Data":"a4567b8a512f6afc2a33af0577da173a511b7ea0b98b67a3e548c26a0e448321"} Mar 08 22:09:50.829551 master-0 kubenswrapper[7480]: I0308 22:09:50.829410 7480 scope.go:117] "RemoveContainer" containerID="5946b7f2d9d566068ae07c485f39d2cd8eea56a2d551b41eae667da0ce359cfb" Mar 08 22:09:50.830633 master-0 kubenswrapper[7480]: I0308 22:09:50.830556 7480 scope.go:117] "RemoveContainer" containerID="a4567b8a512f6afc2a33af0577da173a511b7ea0b98b67a3e548c26a0e448321" Mar 08 22:09:50.831140 master-0 kubenswrapper[7480]: E0308 22:09:50.831036 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=operator-controller-controller-manager-6598bfb6c4-nk294_openshift-operator-controller(077643a2-ab2d-4f12-9abf-42a1c56d7aff)\"" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" podUID="077643a2-ab2d-4f12-9abf-42a1c56d7aff" Mar 08 22:09:51.497920 master-0 kubenswrapper[7480]: I0308 22:09:51.497788 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 22:09:51.498774 master-0 kubenswrapper[7480]: I0308 22:09:51.498718 7480 scope.go:117] "RemoveContainer" containerID="bbaef61fb3881295b80f5476ce40c1eeb152f4f8c17f1203f7df159cc62e41fb" Mar 08 22:09:51.499135 master-0 kubenswrapper[7480]: E0308 22:09:51.499039 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=catalogd-controller-manager-7f8b8b6f4c-qv4bv_openshift-catalogd(2a91f36f-900e-4b99-9be1-dfc61d8e31d9)\"" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" podUID="2a91f36f-900e-4b99-9be1-dfc61d8e31d9" Mar 08 22:09:51.503872 master-0 kubenswrapper[7480]: I0308 22:09:51.503770 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:09:51.503872 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:09:51.503872 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:09:51.503872 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:09:51.504255 master-0 kubenswrapper[7480]: I0308 22:09:51.503927 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 
22:09:51.838785 master-0 kubenswrapper[7480]: I0308 22:09:51.838684 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-nk294_077643a2-ab2d-4f12-9abf-42a1c56d7aff/manager/1.log" Mar 08 22:09:51.859416 master-0 kubenswrapper[7480]: I0308 22:09:51.859330 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 22:09:51.860371 master-0 kubenswrapper[7480]: I0308 22:09:51.860317 7480 scope.go:117] "RemoveContainer" containerID="a4567b8a512f6afc2a33af0577da173a511b7ea0b98b67a3e548c26a0e448321" Mar 08 22:09:51.860895 master-0 kubenswrapper[7480]: E0308 22:09:51.860831 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=operator-controller-controller-manager-6598bfb6c4-nk294_openshift-operator-controller(077643a2-ab2d-4f12-9abf-42a1c56d7aff)\"" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" podUID="077643a2-ab2d-4f12-9abf-42a1c56d7aff" Mar 08 22:09:52.505174 master-0 kubenswrapper[7480]: I0308 22:09:52.505049 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:09:52.505174 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:09:52.505174 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:09:52.505174 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:09:52.505576 master-0 kubenswrapper[7480]: I0308 22:09:52.505183 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:09:52.505576 master-0 kubenswrapper[7480]: I0308 22:09:52.505270 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:09:52.506229 master-0 kubenswrapper[7480]: I0308 22:09:52.506183 7480 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"a9ff593041cd55425d50bbaa4be87eabe25dc7300e7e43dd725623d6f81a484c"} pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" containerMessage="Container router failed startup probe, will be restarted" Mar 08 22:09:52.506304 master-0 kubenswrapper[7480]: I0308 22:09:52.506246 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" containerID="cri-o://a9ff593041cd55425d50bbaa4be87eabe25dc7300e7e43dd725623d6f81a484c" gracePeriod=3600 Mar 08 22:09:52.605418 master-0 kubenswrapper[7480]: I0308 22:09:52.605304 7480 status_manager.go:851] "Failed to get status for pod" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods router-default-79f8cd6fdd-4fsdl)" Mar 08 22:09:56.635554 master-0 kubenswrapper[7480]: I0308 
22:09:56.635338 7480 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 22:09:56.635554 master-0 kubenswrapper[7480]: I0308 22:09:56.635417 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 22:09:56.637147 master-0 kubenswrapper[7480]: I0308 22:09:56.635887 7480 scope.go:117] "RemoveContainer" containerID="852d729d09be57b6d61037e6fcf22117d96dfe2b5817fac91c49139db7eb714e" Mar 08 22:09:56.889472 master-0 kubenswrapper[7480]: I0308 22:09:56.889253 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" event={"ID":"7e0267ba-5dd7-4e81-885f-95b27a7b42ea","Type":"ContainerStarted","Data":"caef745beab0d63a4013a6a6e99e9afcba1e4b4799e5753cb1368b115c97f35f"} Mar 08 22:09:56.890163 master-0 kubenswrapper[7480]: I0308 22:09:56.890105 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 22:09:56.892828 master-0 kubenswrapper[7480]: I0308 22:09:56.892745 7480 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-5ljhh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" start-of-body= Mar 08 22:09:56.892985 master-0 kubenswrapper[7480]: I0308 22:09:56.892859 7480 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" podUID="7e0267ba-5dd7-4e81-885f-95b27a7b42ea" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" Mar 08 22:09:57.906754 master-0 kubenswrapper[7480]: I0308 22:09:57.905755 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 22:09:58.248206 master-0 kubenswrapper[7480]: E0308 22:09:58.247858 7480 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189afd0b92a09f65 openshift-kube-controller-manager 11515 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:5bd68ed75dc57765fa56dbf42c892ba9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 22:07:09 +0000 UTC,LastTimestamp:2026-03-08 22:07:54.648785503 +0000 UTC m=+625.102406145,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 22:10:01.468214 master-0 kubenswrapper[7480]: E0308 22:10:01.468010 7480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request 
canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 08 22:10:01.498370 master-0 kubenswrapper[7480]: I0308 22:10:01.498208 7480 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 22:10:01.499957 master-0 kubenswrapper[7480]: I0308 22:10:01.499816 7480 scope.go:117] "RemoveContainer" containerID="bbaef61fb3881295b80f5476ce40c1eeb152f4f8c17f1203f7df159cc62e41fb" Mar 08 22:10:01.859450 master-0 kubenswrapper[7480]: I0308 22:10:01.859385 7480 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 22:10:01.860407 master-0 kubenswrapper[7480]: I0308 22:10:01.860372 7480 scope.go:117] "RemoveContainer" containerID="a4567b8a512f6afc2a33af0577da173a511b7ea0b98b67a3e548c26a0e448321" Mar 08 22:10:01.931887 master-0 kubenswrapper[7480]: I0308 22:10:01.931818 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-qv4bv_2a91f36f-900e-4b99-9be1-dfc61d8e31d9/manager/1.log" Mar 08 22:10:01.932556 master-0 kubenswrapper[7480]: I0308 22:10:01.932487 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" event={"ID":"2a91f36f-900e-4b99-9be1-dfc61d8e31d9","Type":"ContainerStarted","Data":"ccc6fcdc46611b9596f521b2881844d6f1a41b639a2e8a1e74a6dd4c88e74ea5"} Mar 08 22:10:01.932932 master-0 kubenswrapper[7480]: I0308 22:10:01.932881 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 22:10:02.947375 master-0 kubenswrapper[7480]: I0308 22:10:02.947288 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-nk294_077643a2-ab2d-4f12-9abf-42a1c56d7aff/manager/1.log" Mar 08 22:10:02.949019 master-0 kubenswrapper[7480]: I0308 22:10:02.948949 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" event={"ID":"077643a2-ab2d-4f12-9abf-42a1c56d7aff","Type":"ContainerStarted","Data":"0c8209add6ea0d058f261c8dd869620ca936a0ffd0bcfd90c4fa209b2d884ec7"} Mar 08 22:10:02.949934 master-0 kubenswrapper[7480]: I0308 22:10:02.949884 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 22:10:03.061312 master-0 kubenswrapper[7480]: E0308 22:10:03.055796 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:09:53Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:09:53Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:09:53Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:09:53Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:ae042a5d32eb2f18d537f2068849e665b55df7d8360daedaaeea98bd2a79e769\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:d077bbabe6cb885ed229119008480493e8364e4bfddaa00b099f68c52b016e6b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1733328350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:063b8972231e65eb43f6545ba37804f68138dc54d97b91a652a1c5bc7dc76aa5\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:cf682d23b2857e455609879a0867d171a221c18e2cec995dd79570b77c5a4705\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1272201949},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:e0c034ae18daa01af8d073f8cc24ae4af87883c664304910eab1167fdfd60c0b\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:ef0c6b9e405f7a452211e063ce07ded04ccbe38b53860bfd71b5a7cd5072830a\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1229556414},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:82ad8d62d92a8cc5e2391e3b0746219bd740cc26741bc7571010d337240fa112\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:ec87cd8fce2d3b4e2b15f9abaea232c03ff5a6dd46002ea24418a21973abf220\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1220167895},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8\\\"],\\\"sizeBytes\\\":880378279},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813c
b09825b5456be91b4cdd4d02f470f22a33de42c753f2b7\\\"],\\\"sizeBytes\\\":862197440},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce\\\"],\\\"sizeBytes\\\":557426734},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5\\\"],\\\"sizeBytes\\\":513581866},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821\\\"],\\\"sizeBytes\\\":504658657},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9b2e765b795c30c910c331c85226e5db0d56463b6c81d79ded739cba76e2b032\\\"],\\\"sizeBytes\\\":487151732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efed4867528a19e3de56447aa00fe53a6d97b74a207e9adb57f06c62dcc8944e\\\"],\\\"sizeBytes\\\":480534195},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:243ce0f08a360370edf4960aec94fc6c5be9d4aae26cf8c5320adcd047c1b14f\\\"],\\\"sizeBytes\\\":471430788}]}}\" for node \"master-0\": the server was unable to return a response in the time allotted, but may still be processing the request (patch nodes master-0)" Mar 08 22:10:05.606602 master-0 kubenswrapper[7480]: E0308 22:10:05.606422 7480 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 08 22:10:05.978531 master-0 kubenswrapper[7480]: I0308 22:10:05.978446 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"d96629c1f566486e43c8e0582e2c2eba47afa3a936c512881f234861d282525c"} Mar 08 22:10:05.979428 master-0 kubenswrapper[7480]: I0308 22:10:05.979374 7480 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="5b9fb57f-3c02-459c-97cf-261a396cc93f" Mar 08 22:10:05.979687 master-0 kubenswrapper[7480]: I0308 22:10:05.979647 7480 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="5b9fb57f-3c02-459c-97cf-261a396cc93f" Mar 08 22:10:05.982126 master-0 kubenswrapper[7480]: I0308 22:10:05.982030 7480 generic.go:334] "Generic (PLEG): container finished" podID="2395900a-ff6b-46ff-92c6-a8a1b5675b67" containerID="8f9e5a4db5da2d0ea86986733f581cc247c4f65a008f2bc00c1b7330c1022a26" exitCode=0 Mar 08 22:10:05.982389 master-0 kubenswrapper[7480]: I0308 22:10:05.982151 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" event={"ID":"2395900a-ff6b-46ff-92c6-a8a1b5675b67","Type":"ContainerDied","Data":"8f9e5a4db5da2d0ea86986733f581cc247c4f65a008f2bc00c1b7330c1022a26"} Mar 08 22:10:05.982608 master-0 kubenswrapper[7480]: I0308 22:10:05.982575 7480 scope.go:117] "RemoveContainer" containerID="04d2e0520d46f0208b4f81730f6d539f9f11e470a035dc08dbf06867ed1a4e14" Mar 08 22:10:05.983438 master-0 kubenswrapper[7480]: I0308 22:10:05.983384 7480 scope.go:117] "RemoveContainer" containerID="8f9e5a4db5da2d0ea86986733f581cc247c4f65a008f2bc00c1b7330c1022a26" Mar 08 22:10:05.983917 master-0 kubenswrapper[7480]: E0308 22:10:05.983856 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=controller-manager pod=controller-manager-f7df5f5b-txsrq_openshift-controller-manager(2395900a-ff6b-46ff-92c6-a8a1b5675b67)\"" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" podUID="2395900a-ff6b-46ff-92c6-a8a1b5675b67" Mar 08 22:10:05.986706 master-0 kubenswrapper[7480]: I0308 22:10:05.986639 7480 generic.go:334] "Generic (PLEG): container finished" podID="081acedd-4c88-461f-80f3-e80fdbadb725" containerID="b17d02ce220cb7f77b9b97b6a5543cd3f92bedd3e7c85706528fb89c8a16b4f5" exitCode=0 Mar 08 22:10:05.986706 master-0 kubenswrapper[7480]: I0308 22:10:05.986693 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" event={"ID":"081acedd-4c88-461f-80f3-e80fdbadb725","Type":"ContainerDied","Data":"b17d02ce220cb7f77b9b97b6a5543cd3f92bedd3e7c85706528fb89c8a16b4f5"} Mar 08 22:10:05.987411 master-0 kubenswrapper[7480]: I0308 22:10:05.987365 7480 scope.go:117] "RemoveContainer" containerID="b17d02ce220cb7f77b9b97b6a5543cd3f92bedd3e7c85706528fb89c8a16b4f5" Mar 08 22:10:05.987722 master-0 kubenswrapper[7480]: E0308 22:10:05.987684 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-cluster-manager pod=ovnkube-control-plane-66b55d57d-ngrjm_openshift-ovn-kubernetes(081acedd-4c88-461f-80f3-e80fdbadb725)\"" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" podUID="081acedd-4c88-461f-80f3-e80fdbadb725" Mar 08 22:10:06.071470 master-0 kubenswrapper[7480]: I0308 22:10:06.071426 7480 scope.go:117] "RemoveContainer" 
containerID="aaa76f728d77c2984e519842ceb28a5273072cbb92bc05bafd70d63dc2b5a869" Mar 08 22:10:07.003893 master-0 kubenswrapper[7480]: I0308 22:10:07.003777 7480 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="d96629c1f566486e43c8e0582e2c2eba47afa3a936c512881f234861d282525c" exitCode=0 Mar 08 22:10:07.003893 master-0 kubenswrapper[7480]: I0308 22:10:07.003851 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"d96629c1f566486e43c8e0582e2c2eba47afa3a936c512881f234861d282525c"} Mar 08 22:10:11.502044 master-0 kubenswrapper[7480]: I0308 22:10:11.501979 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 22:10:11.862723 master-0 kubenswrapper[7480]: I0308 22:10:11.862472 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 22:10:13.056561 master-0 kubenswrapper[7480]: E0308 22:10:13.056480 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 22:10:13.992884 master-0 kubenswrapper[7480]: I0308 22:10:13.992807 7480 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 22:10:13.992884 master-0 kubenswrapper[7480]: I0308 22:10:13.992906 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 22:10:13.993784 master-0 kubenswrapper[7480]: I0308 22:10:13.993732 7480 scope.go:117] "RemoveContainer" containerID="8f9e5a4db5da2d0ea86986733f581cc247c4f65a008f2bc00c1b7330c1022a26" Mar 08 22:10:13.994147 master-0 kubenswrapper[7480]: E0308 22:10:13.994103 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=controller-manager pod=controller-manager-f7df5f5b-txsrq_openshift-controller-manager(2395900a-ff6b-46ff-92c6-a8a1b5675b67)\"" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" podUID="2395900a-ff6b-46ff-92c6-a8a1b5675b67" Mar 08 22:10:17.087682 master-0 kubenswrapper[7480]: I0308 22:10:17.087596 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-wklhr_c901b468-b8e9-48f8-8050-0d54e24e2adb/snapshot-controller/2.log" Mar 08 22:10:17.088672 master-0 kubenswrapper[7480]: I0308 22:10:17.088611 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-wklhr_c901b468-b8e9-48f8-8050-0d54e24e2adb/snapshot-controller/1.log" Mar 08 22:10:17.088760 master-0 kubenswrapper[7480]: I0308 22:10:17.088698 7480 generic.go:334] "Generic (PLEG): container finished" podID="c901b468-b8e9-48f8-8050-0d54e24e2adb" containerID="f2f2c209c9dc032368727672b5a81d06749c220790791a47ead234478e14b109" exitCode=1 Mar 08 22:10:17.088906 master-0 kubenswrapper[7480]: I0308 22:10:17.088782 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" event={"ID":"c901b468-b8e9-48f8-8050-0d54e24e2adb","Type":"ContainerDied","Data":"f2f2c209c9dc032368727672b5a81d06749c220790791a47ead234478e14b109"} Mar 08 22:10:17.088993 master-0 kubenswrapper[7480]: I0308 22:10:17.088955 7480 scope.go:117] "RemoveContainer" containerID="2bcf2f4522ec1e98454f0d3a88ae01a27705138b2f5fbbd08bc581f106c16a5d" Mar 08 22:10:17.089894 master-0 kubenswrapper[7480]: I0308 22:10:17.089822 7480 scope.go:117] "RemoveContainer" containerID="f2f2c209c9dc032368727672b5a81d06749c220790791a47ead234478e14b109" Mar 08 22:10:17.090371 master-0 kubenswrapper[7480]: E0308 22:10:17.090307 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-wklhr_openshift-cluster-storage-operator(c901b468-b8e9-48f8-8050-0d54e24e2adb)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" podUID="c901b468-b8e9-48f8-8050-0d54e24e2adb" Mar 08 22:10:17.781862 master-0 kubenswrapper[7480]: I0308 22:10:17.781750 7480 scope.go:117] "RemoveContainer" containerID="b17d02ce220cb7f77b9b97b6a5543cd3f92bedd3e7c85706528fb89c8a16b4f5" Mar 08 22:10:18.102320 master-0 kubenswrapper[7480]: I0308 22:10:18.102223 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-wklhr_c901b468-b8e9-48f8-8050-0d54e24e2adb/snapshot-controller/2.log" Mar 08 22:10:18.106801 master-0 kubenswrapper[7480]: I0308 22:10:18.106722 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" event={"ID":"081acedd-4c88-461f-80f3-e80fdbadb725","Type":"ContainerStarted","Data":"76be1b9b9ad48798fd90927a0411e2ee8004152f03a23869518cd0c790a9c13f"} Mar 08 22:10:18.470012 master-0 kubenswrapper[7480]: E0308 22:10:18.469831 7480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 08 22:10:21.134255 master-0 kubenswrapper[7480]: I0308 22:10:21.134171 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-xwmmm_d9e9c931-9595-42f1-bbc2-c412286f6cd1/cluster-baremetal-operator/1.log" Mar 08 22:10:21.135765 master-0 kubenswrapper[7480]: I0308 22:10:21.135705 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-xwmmm_d9e9c931-9595-42f1-bbc2-c412286f6cd1/cluster-baremetal-operator/0.log" Mar 08 22:10:21.135903 master-0 kubenswrapper[7480]: I0308 22:10:21.135791 7480 generic.go:334] "Generic (PLEG): container finished" podID="d9e9c931-9595-42f1-bbc2-c412286f6cd1" containerID="bcc6f26fb91d7fadf6887617bfb463e5c03667a9473c0563f69e191080e03b4a" exitCode=1 Mar 08 22:10:21.135977 master-0 kubenswrapper[7480]: I0308 22:10:21.135892 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" event={"ID":"d9e9c931-9595-42f1-bbc2-c412286f6cd1","Type":"ContainerDied","Data":"bcc6f26fb91d7fadf6887617bfb463e5c03667a9473c0563f69e191080e03b4a"} Mar 08 22:10:21.135977 master-0 kubenswrapper[7480]: 
I0308 22:10:21.135951 7480 scope.go:117] "RemoveContainer" containerID="6edcb8198a1dd9b552f9d5577953c53700190a2b87b4307329abfdbc057033f6" Mar 08 22:10:21.137648 master-0 kubenswrapper[7480]: I0308 22:10:21.137505 7480 scope.go:117] "RemoveContainer" containerID="bcc6f26fb91d7fadf6887617bfb463e5c03667a9473c0563f69e191080e03b4a" Mar 08 22:10:21.138584 master-0 kubenswrapper[7480]: E0308 22:10:21.138457 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-5cdb4c5598-xwmmm_openshift-machine-api(d9e9c931-9595-42f1-bbc2-c412286f6cd1)\"" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" podUID="d9e9c931-9595-42f1-bbc2-c412286f6cd1" Mar 08 22:10:21.139630 master-0 kubenswrapper[7480]: I0308 22:10:21.139581 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-754bdc9f9d-stxvg_4cbc6c17-7c16-435f-9399-b6f1ddb6d17f/machine-approver-controller/0.log" Mar 08 22:10:21.140188 master-0 kubenswrapper[7480]: I0308 22:10:21.140122 7480 generic.go:334] "Generic (PLEG): container finished" podID="4cbc6c17-7c16-435f-9399-b6f1ddb6d17f" containerID="4c252b52dc72b4cf9a688685e68fed111ec3680baa86d43719d7d70d42220e79" exitCode=255 Mar 08 22:10:21.140276 master-0 kubenswrapper[7480]: I0308 22:10:21.140186 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg" event={"ID":"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f","Type":"ContainerDied","Data":"4c252b52dc72b4cf9a688685e68fed111ec3680baa86d43719d7d70d42220e79"} Mar 08 22:10:21.141377 master-0 kubenswrapper[7480]: I0308 22:10:21.141330 7480 scope.go:117] "RemoveContainer" containerID="4c252b52dc72b4cf9a688685e68fed111ec3680baa86d43719d7d70d42220e79" Mar 08 22:10:22.152795 master-0 kubenswrapper[7480]: I0308 22:10:22.152692 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-754bdc9f9d-stxvg_4cbc6c17-7c16-435f-9399-b6f1ddb6d17f/machine-approver-controller/0.log" Mar 08 22:10:22.153910 master-0 kubenswrapper[7480]: I0308 22:10:22.153689 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg" event={"ID":"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f","Type":"ContainerStarted","Data":"7387a6e6e266fb7b7bd4761c192fb5472805d3bd3d892de94f1b2578384080b7"} Mar 08 22:10:22.159732 master-0 kubenswrapper[7480]: I0308 22:10:22.159675 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-xwmmm_d9e9c931-9595-42f1-bbc2-c412286f6cd1/cluster-baremetal-operator/1.log" Mar 08 22:10:23.056920 master-0 kubenswrapper[7480]: E0308 22:10:23.056846 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 22:10:24.183203 master-0 kubenswrapper[7480]: I0308 22:10:24.183046 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5bd68ed75dc57765fa56dbf42c892ba9/kube-controller-manager/0.log" Mar 08 22:10:24.184057 master-0 kubenswrapper[7480]: I0308 22:10:24.183221 7480 
generic.go:334] "Generic (PLEG): container finished" podID="5bd68ed75dc57765fa56dbf42c892ba9" containerID="7c8dd2936103822779238860b93c30ecc04ca409eda643b00bfa6d9998b13293" exitCode=0 Mar 08 22:10:24.184057 master-0 kubenswrapper[7480]: I0308 22:10:24.183283 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5bd68ed75dc57765fa56dbf42c892ba9","Type":"ContainerDied","Data":"7c8dd2936103822779238860b93c30ecc04ca409eda643b00bfa6d9998b13293"} Mar 08 22:10:24.184386 master-0 kubenswrapper[7480]: I0308 22:10:24.184326 7480 scope.go:117] "RemoveContainer" containerID="7c8dd2936103822779238860b93c30ecc04ca409eda643b00bfa6d9998b13293" Mar 08 22:10:25.198433 master-0 kubenswrapper[7480]: I0308 22:10:25.198365 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-c246n_6eb502a1-db10-46ba-b698-461919464fb9/control-plane-machine-set-operator/1.log" Mar 08 22:10:25.199933 master-0 kubenswrapper[7480]: I0308 22:10:25.199880 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-c246n_6eb502a1-db10-46ba-b698-461919464fb9/control-plane-machine-set-operator/0.log" Mar 08 22:10:25.200052 master-0 kubenswrapper[7480]: I0308 22:10:25.199944 7480 generic.go:334] "Generic (PLEG): container finished" podID="6eb502a1-db10-46ba-b698-461919464fb9" containerID="91654533c4587e9af46f22c13f2fb947540ddaf2d482fd744c4652dfb1a9f5a2" exitCode=1 Mar 08 22:10:25.200052 master-0 kubenswrapper[7480]: I0308 22:10:25.200018 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-c246n" event={"ID":"6eb502a1-db10-46ba-b698-461919464fb9","Type":"ContainerDied","Data":"91654533c4587e9af46f22c13f2fb947540ddaf2d482fd744c4652dfb1a9f5a2"} Mar 08 22:10:25.200224 master-0 kubenswrapper[7480]: I0308 22:10:25.200086 7480 scope.go:117] "RemoveContainer" containerID="8f7cb4c1d4399f77a4bee9272b7411e3d08f666e05ff23bad71da9a5b93158e4" Mar 08 22:10:25.200995 master-0 kubenswrapper[7480]: I0308 22:10:25.200909 7480 scope.go:117] "RemoveContainer" containerID="91654533c4587e9af46f22c13f2fb947540ddaf2d482fd744c4652dfb1a9f5a2" Mar 08 22:10:25.201547 master-0 kubenswrapper[7480]: E0308 22:10:25.201461 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"control-plane-machine-set-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=control-plane-machine-set-operator pod=control-plane-machine-set-operator-6686554ddc-c246n_openshift-machine-api(6eb502a1-db10-46ba-b698-461919464fb9)\"" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-c246n" podUID="6eb502a1-db10-46ba-b698-461919464fb9" Mar 08 22:10:25.206926 master-0 kubenswrapper[7480]: I0308 22:10:25.206867 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5bd68ed75dc57765fa56dbf42c892ba9/kube-controller-manager/0.log" Mar 08 22:10:25.207061 master-0 kubenswrapper[7480]: I0308 22:10:25.206962 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5bd68ed75dc57765fa56dbf42c892ba9","Type":"ContainerStarted","Data":"f3e6830f6be965a1b71f9b3b1c36d3c6333b3e757c1527692bfbad8043cb5f84"} Mar 08 22:10:26.221736 master-0 kubenswrapper[7480]: I0308 
22:10:26.221666 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-c246n_6eb502a1-db10-46ba-b698-461919464fb9/control-plane-machine-set-operator/1.log" Mar 08 22:10:26.448871 master-0 kubenswrapper[7480]: I0308 22:10:26.448790 7480 scope.go:117] "RemoveContainer" containerID="8f9e5a4db5da2d0ea86986733f581cc247c4f65a008f2bc00c1b7330c1022a26" Mar 08 22:10:27.237486 master-0 kubenswrapper[7480]: I0308 22:10:27.237405 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" event={"ID":"2395900a-ff6b-46ff-92c6-a8a1b5675b67","Type":"ContainerStarted","Data":"85005f16c583a4ab5c5a867ba6783fd03ab5c53c3034265ad8c5c484c0e889f0"} Mar 08 22:10:27.238526 master-0 kubenswrapper[7480]: I0308 22:10:27.237829 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 22:10:27.245722 master-0 kubenswrapper[7480]: I0308 22:10:27.245615 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 22:10:29.433140 master-0 kubenswrapper[7480]: I0308 22:10:29.432929 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:10:29.433140 master-0 kubenswrapper[7480]: I0308 22:10:29.433066 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:10:29.434472 master-0 kubenswrapper[7480]: I0308 22:10:29.434367 7480 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused" start-of-body= Mar 08 22:10:29.434643 master-0 kubenswrapper[7480]: I0308 22:10:29.434475 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused" Mar 08 22:10:30.781942 master-0 kubenswrapper[7480]: I0308 22:10:30.781778 7480 scope.go:117] "RemoveContainer" containerID="f2f2c209c9dc032368727672b5a81d06749c220790791a47ead234478e14b109" Mar 08 22:10:30.782868 master-0 kubenswrapper[7480]: E0308 22:10:30.782248 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-wklhr_openshift-cluster-storage-operator(c901b468-b8e9-48f8-8050-0d54e24e2adb)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" podUID="c901b468-b8e9-48f8-8050-0d54e24e2adb" Mar 08 22:10:32.252010 master-0 kubenswrapper[7480]: E0308 22:10:32.251788 7480 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189afd0ba1e959ae openshift-kube-controller-manager 11516 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:5bd68ed75dc57765fa56dbf42c892ba9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 22:07:09 +0000 UTC,LastTimestamp:2026-03-08 22:07:54.922775603 +0000 UTC m=+625.376396215,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 22:10:33.057781 master-0 kubenswrapper[7480]: E0308 22:10:33.057662 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 22:10:35.471857 master-0 kubenswrapper[7480]: E0308 22:10:35.471740 7480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 08 22:10:36.781332 master-0 kubenswrapper[7480]: I0308 22:10:36.781268 7480 scope.go:117] "RemoveContainer" containerID="91654533c4587e9af46f22c13f2fb947540ddaf2d482fd744c4652dfb1a9f5a2" Mar 08 22:10:36.782423 master-0 kubenswrapper[7480]: I0308 22:10:36.782358 7480 scope.go:117] "RemoveContainer" containerID="bcc6f26fb91d7fadf6887617bfb463e5c03667a9473c0563f69e191080e03b4a" Mar 08 22:10:37.331887 master-0 kubenswrapper[7480]: I0308 22:10:37.331792 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-c246n_6eb502a1-db10-46ba-b698-461919464fb9/control-plane-machine-set-operator/1.log" Mar 08 22:10:37.332236 master-0 kubenswrapper[7480]: I0308 22:10:37.331979 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-c246n" event={"ID":"6eb502a1-db10-46ba-b698-461919464fb9","Type":"ContainerStarted","Data":"24b28697148b3cce0c10494ac1803deb5901b19d5b4c2913633b09d622b49222"} Mar 08 22:10:37.335196 master-0 kubenswrapper[7480]: I0308 22:10:37.335152 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-xwmmm_d9e9c931-9595-42f1-bbc2-c412286f6cd1/cluster-baremetal-operator/1.log" Mar 08 22:10:37.335897 master-0 kubenswrapper[7480]: I0308 22:10:37.335830 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" event={"ID":"d9e9c931-9595-42f1-bbc2-c412286f6cd1","Type":"ContainerStarted","Data":"f6e2611dc907c17bbce51678676042badff55c1b3f801a765e588a3f1a01f63e"} Mar 08 22:10:39.356690 master-0 kubenswrapper[7480]: I0308 22:10:39.356625 7480 generic.go:334] "Generic (PLEG): container finished" podID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerID="a9ff593041cd55425d50bbaa4be87eabe25dc7300e7e43dd725623d6f81a484c" exitCode=0 Mar 08 22:10:39.356690 master-0 kubenswrapper[7480]: I0308 22:10:39.356679 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" 
event={"ID":"81f5ed55-225c-41e2-bc9d-b41063a604c9","Type":"ContainerDied","Data":"a9ff593041cd55425d50bbaa4be87eabe25dc7300e7e43dd725623d6f81a484c"} Mar 08 22:10:39.357425 master-0 kubenswrapper[7480]: I0308 22:10:39.356726 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" event={"ID":"81f5ed55-225c-41e2-bc9d-b41063a604c9","Type":"ContainerStarted","Data":"b774a43655d7769bfa98aff1d64209f6f402f99c955ad8667823c36ae49e4cf7"} Mar 08 22:10:39.357425 master-0 kubenswrapper[7480]: I0308 22:10:39.356749 7480 scope.go:117] "RemoveContainer" containerID="8e67a6a8195a1bf0907601fa19ffa597a648c56ee5160c3ec3e81c5ecf98df23" Mar 08 22:10:39.500737 master-0 kubenswrapper[7480]: I0308 22:10:39.500627 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:10:39.505135 master-0 kubenswrapper[7480]: I0308 22:10:39.505037 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:10:39.505135 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:10:39.505135 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:10:39.505135 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:10:39.505448 master-0 kubenswrapper[7480]: I0308 22:10:39.505168 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:10:39.983617 master-0 kubenswrapper[7480]: E0308 22:10:39.983526 7480 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 08 22:10:40.370107 master-0 kubenswrapper[7480]: I0308 22:10:40.369990 7480 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="5b9fb57f-3c02-459c-97cf-261a396cc93f" Mar 08 22:10:40.370107 master-0 kubenswrapper[7480]: I0308 22:10:40.370063 7480 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="5b9fb57f-3c02-459c-97cf-261a396cc93f" Mar 08 22:10:40.503956 master-0 kubenswrapper[7480]: I0308 22:10:40.503841 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:10:40.503956 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:10:40.503956 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:10:40.503956 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:10:40.504411 master-0 kubenswrapper[7480]: I0308 22:10:40.503958 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:10:41.518107 master-0 kubenswrapper[7480]: I0308 22:10:41.517575 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:10:41.518107 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:10:41.518107 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:10:41.518107 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:10:41.519232 master-0 kubenswrapper[7480]: I0308 22:10:41.518122 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:10:42.433188 master-0 kubenswrapper[7480]: I0308 22:10:42.433056 7480 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 22:10:42.433538 master-0 kubenswrapper[7480]: I0308 22:10:42.433198 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 22:10:42.503561 master-0 kubenswrapper[7480]: I0308 22:10:42.503498 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:10:42.503561 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:10:42.503561 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:10:42.503561 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:10:42.503987 master-0 kubenswrapper[7480]: I0308 22:10:42.503596 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:10:43.058221 master-0 kubenswrapper[7480]: E0308 22:10:43.058054 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 22:10:43.058221 master-0 kubenswrapper[7480]: E0308 22:10:43.058181 7480 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 08 22:10:43.504464 master-0 kubenswrapper[7480]: I0308 22:10:43.504404 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:10:43.504464 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:10:43.504464 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:10:43.504464 master-0 kubenswrapper[7480]: healthz check 
failed Mar 08 22:10:43.505060 master-0 kubenswrapper[7480]: I0308 22:10:43.505017 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:10:43.782067 master-0 kubenswrapper[7480]: I0308 22:10:43.781778 7480 scope.go:117] "RemoveContainer" containerID="f2f2c209c9dc032368727672b5a81d06749c220790791a47ead234478e14b109" Mar 08 22:10:44.403264 master-0 kubenswrapper[7480]: I0308 22:10:44.403163 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-wklhr_c901b468-b8e9-48f8-8050-0d54e24e2adb/snapshot-controller/2.log" Mar 08 22:10:44.403264 master-0 kubenswrapper[7480]: I0308 22:10:44.403258 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" event={"ID":"c901b468-b8e9-48f8-8050-0d54e24e2adb","Type":"ContainerStarted","Data":"d74470db0f0dbce9d14695f1d68e008bcfbbf4781712d0e2ba9a149fa469dffb"} Mar 08 22:10:44.504999 master-0 kubenswrapper[7480]: I0308 22:10:44.504874 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:10:44.504999 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:10:44.504999 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:10:44.504999 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:10:44.505618 master-0 kubenswrapper[7480]: I0308 22:10:44.505016 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:10:45.501285 master-0 kubenswrapper[7480]: I0308 22:10:45.501215 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:10:45.503942 master-0 kubenswrapper[7480]: I0308 22:10:45.503878 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:10:45.503942 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:10:45.503942 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:10:45.503942 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:10:45.504350 master-0 kubenswrapper[7480]: I0308 22:10:45.504287 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:10:46.504230 master-0 kubenswrapper[7480]: I0308 22:10:46.504123 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:10:46.504230 master-0 kubenswrapper[7480]: [-]has-synced failed: reason 
withheld Mar 08 22:10:46.504230 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:10:46.504230 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:10:46.504230 master-0 kubenswrapper[7480]: I0308 22:10:46.504229 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:10:47.505524 master-0 kubenswrapper[7480]: I0308 22:10:47.505408 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:10:47.505524 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:10:47.505524 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:10:47.505524 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:10:47.505524 master-0 kubenswrapper[7480]: I0308 22:10:47.505517 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:10:48.503493 master-0 kubenswrapper[7480]: I0308 22:10:48.503425 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:10:48.503493 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:10:48.503493 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:10:48.503493 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:10:48.503859 master-0 kubenswrapper[7480]: I0308 22:10:48.503502 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:10:49.504176 master-0 kubenswrapper[7480]: I0308 22:10:49.504032 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:10:49.504176 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:10:49.504176 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:10:49.504176 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:10:49.505706 master-0 kubenswrapper[7480]: I0308 22:10:49.504167 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:10:50.503387 master-0 kubenswrapper[7480]: I0308 22:10:50.503293 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:10:50.503387 master-0 kubenswrapper[7480]: [-]has-synced failed: 
reason withheld Mar 08 22:10:50.503387 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:10:50.503387 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:10:50.503897 master-0 kubenswrapper[7480]: I0308 22:10:50.503395 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:10:51.503241 master-0 kubenswrapper[7480]: I0308 22:10:51.503128 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:10:51.503241 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:10:51.503241 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:10:51.503241 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:10:51.504341 master-0 kubenswrapper[7480]: I0308 22:10:51.503254 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:10:52.432437 master-0 kubenswrapper[7480]: I0308 22:10:52.432323 7480 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 22:10:52.432832 master-0 kubenswrapper[7480]: I0308 22:10:52.432448 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 22:10:52.432832 master-0 kubenswrapper[7480]: I0308 22:10:52.432540 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:10:52.433645 master-0 kubenswrapper[7480]: I0308 22:10:52.433580 7480 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"f3e6830f6be965a1b71f9b3b1c36d3c6333b3e757c1527692bfbad8043cb5f84"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 08 22:10:52.433786 master-0 kubenswrapper[7480]: I0308 22:10:52.433754 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="cluster-policy-controller" containerID="cri-o://f3e6830f6be965a1b71f9b3b1c36d3c6333b3e757c1527692bfbad8043cb5f84" gracePeriod=30 Mar 08 22:10:52.473253 master-0 kubenswrapper[7480]: E0308 22:10:52.473065 7480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 08 22:10:52.504497 master-0 kubenswrapper[7480]: I0308 22:10:52.504370 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:10:52.504497 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:10:52.504497 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:10:52.504497 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:10:52.506284 master-0 kubenswrapper[7480]: I0308 22:10:52.504491 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:10:52.607707 master-0 kubenswrapper[7480]: I0308 22:10:52.607640 7480 status_manager.go:851] "Failed to get status for pod" podUID="a1a56802af72ce1aac6b5077f1695ac0" pod="kube-system/bootstrap-kube-scheduler-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods bootstrap-kube-scheduler-master-0)" Mar 08 22:10:53.488652 master-0 kubenswrapper[7480]: I0308 22:10:53.488578 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5bd68ed75dc57765fa56dbf42c892ba9/cluster-policy-controller/1.log" Mar 08 22:10:53.492267 master-0 kubenswrapper[7480]: I0308 22:10:53.492216 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5bd68ed75dc57765fa56dbf42c892ba9/kube-controller-manager/0.log" Mar 08 22:10:53.492402 master-0 kubenswrapper[7480]: I0308 22:10:53.492314 7480 generic.go:334] "Generic (PLEG): container finished" podID="5bd68ed75dc57765fa56dbf42c892ba9" containerID="f3e6830f6be965a1b71f9b3b1c36d3c6333b3e757c1527692bfbad8043cb5f84" exitCode=255 Mar 08 22:10:53.492402 master-0 kubenswrapper[7480]: I0308 22:10:53.492367 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5bd68ed75dc57765fa56dbf42c892ba9","Type":"ContainerDied","Data":"f3e6830f6be965a1b71f9b3b1c36d3c6333b3e757c1527692bfbad8043cb5f84"} Mar 08 22:10:53.492549 master-0 kubenswrapper[7480]: I0308 22:10:53.492413 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5bd68ed75dc57765fa56dbf42c892ba9","Type":"ContainerStarted","Data":"7876de4be365c9e5c092eb2901bb6e41c9485da6dea9f0a90861bb5179a92ed4"} Mar 08 22:10:53.492549 master-0 kubenswrapper[7480]: I0308 22:10:53.492444 7480 scope.go:117] "RemoveContainer" containerID="7c8dd2936103822779238860b93c30ecc04ca409eda643b00bfa6d9998b13293" Mar 08 22:10:53.503168 master-0 kubenswrapper[7480]: I0308 22:10:53.503060 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:10:53.503168 master-0 
kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:10:53.503168 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:10:53.503168 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:10:53.503581 master-0 kubenswrapper[7480]: I0308 22:10:53.503190 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:10:54.503766 master-0 kubenswrapper[7480]: I0308 22:10:54.503690 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:10:54.503766 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:10:54.503766 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:10:54.503766 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:10:54.504500 master-0 kubenswrapper[7480]: I0308 22:10:54.503786 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:10:54.508168 master-0 kubenswrapper[7480]: I0308 22:10:54.508049 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5bd68ed75dc57765fa56dbf42c892ba9/cluster-policy-controller/1.log" Mar 08 22:10:54.512242 master-0 kubenswrapper[7480]: I0308 22:10:54.512173 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5bd68ed75dc57765fa56dbf42c892ba9/kube-controller-manager/0.log" Mar 08 22:10:55.503780 master-0 kubenswrapper[7480]: I0308 22:10:55.503646 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:10:55.503780 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:10:55.503780 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:10:55.503780 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:10:55.505062 master-0 kubenswrapper[7480]: I0308 22:10:55.503810 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:10:56.504932 master-0 kubenswrapper[7480]: I0308 22:10:56.504801 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:10:56.504932 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:10:56.504932 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:10:56.504932 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:10:56.506019 master-0 kubenswrapper[7480]: I0308 22:10:56.504939 7480 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:10:57.503890 master-0 kubenswrapper[7480]: I0308 22:10:57.503774 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:10:57.503890 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:10:57.503890 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:10:57.503890 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:10:57.504340 master-0 kubenswrapper[7480]: I0308 22:10:57.503892 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:10:58.504513 master-0 kubenswrapper[7480]: I0308 22:10:58.504422 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:10:58.504513 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:10:58.504513 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:10:58.504513 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:10:58.505659 master-0 kubenswrapper[7480]: I0308 22:10:58.504523 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:10:59.432716 master-0 kubenswrapper[7480]: I0308 22:10:59.432649 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:10:59.433164 master-0 kubenswrapper[7480]: I0308 22:10:59.433138 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:10:59.506098 master-0 kubenswrapper[7480]: I0308 22:10:59.505982 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:10:59.506098 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:10:59.506098 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:10:59.506098 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:10:59.507176 master-0 kubenswrapper[7480]: I0308 22:10:59.506130 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:00.505043 master-0 kubenswrapper[7480]: I0308 22:11:00.504931 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:00.505043 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:00.505043 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:00.505043 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:00.505623 master-0 kubenswrapper[7480]: I0308 22:11:00.505110 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:01.503462 master-0 kubenswrapper[7480]: I0308 22:11:01.503371 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:01.503462 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:01.503462 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:01.503462 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:01.504768 master-0 kubenswrapper[7480]: I0308 22:11:01.503469 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:02.433358 master-0 kubenswrapper[7480]: I0308 22:11:02.433263 7480 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 22:11:02.433729 master-0 kubenswrapper[7480]: I0308 22:11:02.433370 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 22:11:02.504279 master-0 kubenswrapper[7480]: I0308 22:11:02.504203 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:02.504279 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:02.504279 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:02.504279 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:02.505487 master-0 kubenswrapper[7480]: I0308 22:11:02.505351 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:03.350809 master-0 kubenswrapper[7480]: E0308 22:11:03.350496 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:10:53Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:10:53Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:10:53Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:10:53Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:ae042a5d32eb2f18d537f2068849e665b55df7d8360daedaaeea98bd2a79e769\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:d077bbabe6cb885ed229119008480493e8364e4bfddaa00b099f68c52b016e6b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1733328350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:063b8972231e65eb43f6545ba37804f68138dc54d97b91a652a1c5bc7dc76aa5\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:cf682d23b2857e455609879a0867d171a221c18e2cec995dd79570b77c5a4705\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1272201949},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:e0c034ae18daa01af8d073f8cc24ae4af87883c664304910eab1167fdfd60c0b\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:ef0c6b9e405f7a452211e063ce07ded04ccbe38b53860bfd71b5a7cd5072830a\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1229556414},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:82ad8d62d92a8cc5e2391e3b0746219bd740cc26741bc7571010d337240fa112\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:ec87cd8fce2d3b4e2b15f9abaea232c03ff5a6dd46002ea24418a21973abf220\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1220167895},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8\\\"],\\\"sizeBytes\\\":880378279},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813c
b09825b5456be91b4cdd4d02f470f22a33de42c753f2b7\\\"],\\\"sizeBytes\\\":862197440},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce\\\"],\\\"sizeBytes\\\":557426734},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5\\\"],\\\"sizeBytes\\\":513581866},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821\\\"],\\\"sizeBytes\\\":504658657},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9b2e765b795c30c910c331c85226e5db0d56463b6c81d79ded739cba76e2b032\\\"],\\\"sizeBytes\\\":487151732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efed4867528a19e3de56447aa00fe53a6d97b74a207e9adb57f06c62dcc8944e\\\"],\\\"sizeBytes\\\":480534195},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:243ce0f08a360370edf4960aec94fc6c5be9d4aae26cf8c5320adcd047c1b14f\\\"],\\\"sizeBytes\\\":471430788}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 22:11:03.505267 master-0 kubenswrapper[7480]: I0308 22:11:03.505190 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:03.505267 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 
22:11:03.505267 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:03.505267 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:03.506567 master-0 kubenswrapper[7480]: I0308 22:11:03.506515 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:04.504279 master-0 kubenswrapper[7480]: I0308 22:11:04.504184 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:04.504279 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:04.504279 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:04.504279 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:04.504746 master-0 kubenswrapper[7480]: I0308 22:11:04.504306 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:05.503653 master-0 kubenswrapper[7480]: I0308 22:11:05.503550 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:05.503653 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:05.503653 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:05.503653 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:05.504744 master-0 kubenswrapper[7480]: I0308 22:11:05.503652 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:06.254982 master-0 kubenswrapper[7480]: E0308 22:11:06.254739 7480 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189afd0ba3120a63 openshift-kube-controller-manager 11517 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:5bd68ed75dc57765fa56dbf42c892ba9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 22:07:09 +0000 UTC,LastTimestamp:2026-03-08 22:07:54.93557904 +0000 UTC m=+625.389199632,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 22:11:06.503779 master-0 kubenswrapper[7480]: I0308 22:11:06.503664 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:06.503779 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:06.503779 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:06.503779 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:06.503779 master-0 kubenswrapper[7480]: I0308 22:11:06.503752 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:07.503863 master-0 kubenswrapper[7480]: I0308 22:11:07.503734 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:07.503863 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:07.503863 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:07.503863 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:07.505174 master-0 kubenswrapper[7480]: I0308 22:11:07.503872 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:08.504168 master-0 kubenswrapper[7480]: I0308 22:11:08.504060 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:08.504168 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:08.504168 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:08.504168 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:08.505187 master-0 kubenswrapper[7480]: I0308 22:11:08.504199 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:09.474535 master-0 kubenswrapper[7480]: E0308 22:11:09.474412 7480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 08 22:11:09.505414 master-0 kubenswrapper[7480]: I0308 22:11:09.505311 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:09.505414 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:09.505414 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:09.505414 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:09.506197 master-0 kubenswrapper[7480]: I0308 22:11:09.505473 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" 
podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:10.517207 master-0 kubenswrapper[7480]: I0308 22:11:10.517052 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:10.517207 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:10.517207 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:10.517207 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:10.518123 master-0 kubenswrapper[7480]: I0308 22:11:10.517218 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:11.503839 master-0 kubenswrapper[7480]: I0308 22:11:11.503771 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:11.503839 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:11.503839 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:11.503839 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:11.504929 master-0 kubenswrapper[7480]: I0308 22:11:11.504873 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:12.432381 master-0 kubenswrapper[7480]: I0308 22:11:12.432289 7480 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 22:11:12.433170 master-0 kubenswrapper[7480]: I0308 22:11:12.432392 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 22:11:12.503899 master-0 kubenswrapper[7480]: I0308 22:11:12.503845 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:12.503899 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:12.503899 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:12.503899 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:12.503899 master-0 kubenswrapper[7480]: I0308 22:11:12.503888 7480 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:13.352436 master-0 kubenswrapper[7480]: E0308 22:11:13.351684 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 22:11:13.504809 master-0 kubenswrapper[7480]: I0308 22:11:13.504692 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:13.504809 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:13.504809 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:13.504809 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:13.506029 master-0 kubenswrapper[7480]: I0308 22:11:13.504817 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:14.374136 master-0 kubenswrapper[7480]: E0308 22:11:14.373961 7480 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 08 22:11:14.503881 master-0 kubenswrapper[7480]: I0308 22:11:14.503789 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:14.503881 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:14.503881 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:14.503881 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:14.504466 master-0 kubenswrapper[7480]: I0308 22:11:14.503919 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:14.710116 master-0 kubenswrapper[7480]: I0308 22:11:14.710029 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-wklhr_c901b468-b8e9-48f8-8050-0d54e24e2adb/snapshot-controller/3.log" Mar 08 22:11:14.710900 master-0 kubenswrapper[7480]: I0308 22:11:14.710867 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-wklhr_c901b468-b8e9-48f8-8050-0d54e24e2adb/snapshot-controller/2.log" Mar 08 22:11:14.711353 master-0 kubenswrapper[7480]: I0308 22:11:14.710951 7480 generic.go:334] "Generic (PLEG): container finished" podID="c901b468-b8e9-48f8-8050-0d54e24e2adb" containerID="d74470db0f0dbce9d14695f1d68e008bcfbbf4781712d0e2ba9a149fa469dffb" exitCode=1 Mar 08 22:11:14.711353 master-0 kubenswrapper[7480]: I0308 22:11:14.711062 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" event={"ID":"c901b468-b8e9-48f8-8050-0d54e24e2adb","Type":"ContainerDied","Data":"d74470db0f0dbce9d14695f1d68e008bcfbbf4781712d0e2ba9a149fa469dffb"} Mar 08 22:11:14.711353 master-0 kubenswrapper[7480]: I0308 22:11:14.711163 7480 scope.go:117] "RemoveContainer" containerID="f2f2c209c9dc032368727672b5a81d06749c220790791a47ead234478e14b109" Mar 08 22:11:14.712187 master-0 kubenswrapper[7480]: I0308 22:11:14.712024 7480 scope.go:117] "RemoveContainer" containerID="d74470db0f0dbce9d14695f1d68e008bcfbbf4781712d0e2ba9a149fa469dffb" Mar 08 22:11:14.712620 master-0 kubenswrapper[7480]: E0308 22:11:14.712560 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-wklhr_openshift-cluster-storage-operator(c901b468-b8e9-48f8-8050-0d54e24e2adb)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" podUID="c901b468-b8e9-48f8-8050-0d54e24e2adb" Mar 08 22:11:15.504359 master-0 kubenswrapper[7480]: I0308 22:11:15.504214 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:15.504359 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:15.504359 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:15.504359 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:15.504825 master-0 kubenswrapper[7480]: I0308 22:11:15.504423 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:15.734429 master-0 kubenswrapper[7480]: I0308 22:11:15.734366 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"41ff2358902de9820af0e57b5654a5dd5662e57ab1942e9aa3f97784ba7580d9"} Mar 08 22:11:15.735030 master-0 kubenswrapper[7480]: I0308 22:11:15.734437 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"896ca1240864c042686b8d27bbaf6b98e7018c7035e4ce4b54e7fc7e2545eda3"} Mar 08 22:11:15.735030 master-0 kubenswrapper[7480]: I0308 22:11:15.734452 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"5c13cc724ceb8a47022a4b506a02a4ffa2182349375d59b27a103a3a379a347a"} Mar 08 22:11:15.736969 master-0 kubenswrapper[7480]: I0308 22:11:15.736923 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-wklhr_c901b468-b8e9-48f8-8050-0d54e24e2adb/snapshot-controller/3.log" Mar 08 22:11:16.504698 master-0 kubenswrapper[7480]: I0308 22:11:16.504594 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: 
reason withheld Mar 08 22:11:16.504698 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:16.504698 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:16.504698 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:16.505274 master-0 kubenswrapper[7480]: I0308 22:11:16.504709 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:16.756759 master-0 kubenswrapper[7480]: I0308 22:11:16.756489 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"4f22802100112023432a8b6ca7c77bb2fc7239f09a3e7d345080a8cf8e397b1e"} Mar 08 22:11:16.756759 master-0 kubenswrapper[7480]: I0308 22:11:16.756606 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"94d3b7e3742a7d28fa13f4530eb256cdd591ddfdf571150f5be4ed1fc2b06bd6"} Mar 08 22:11:16.757862 master-0 kubenswrapper[7480]: I0308 22:11:16.756981 7480 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="5b9fb57f-3c02-459c-97cf-261a396cc93f" Mar 08 22:11:16.757862 master-0 kubenswrapper[7480]: I0308 22:11:16.757033 7480 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="5b9fb57f-3c02-459c-97cf-261a396cc93f" Mar 08 22:11:17.504852 master-0 kubenswrapper[7480]: I0308 22:11:17.504643 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:17.504852 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:17.504852 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:17.504852 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:17.504852 master-0 kubenswrapper[7480]: I0308 22:11:17.504768 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:18.505836 master-0 kubenswrapper[7480]: I0308 22:11:18.505707 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:18.505836 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:18.505836 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:18.505836 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:18.507252 master-0 kubenswrapper[7480]: I0308 22:11:18.505850 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:19.504979 master-0 kubenswrapper[7480]: I0308 22:11:19.504873 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:19.504979 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:19.504979 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:19.504979 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:19.505501 master-0 kubenswrapper[7480]: I0308 22:11:19.504994 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:20.503959 master-0 kubenswrapper[7480]: I0308 22:11:20.503868 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:20.503959 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:20.503959 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:20.503959 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:20.504815 master-0 kubenswrapper[7480]: I0308 22:11:20.503972 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:20.820982 master-0 kubenswrapper[7480]: I0308 22:11:20.820911 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 08 22:11:21.504053 master-0 kubenswrapper[7480]: I0308 22:11:21.503941 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:21.504053 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:21.504053 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:21.504053 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:21.504053 master-0 kubenswrapper[7480]: I0308 22:11:21.504012 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:22.433457 master-0 kubenswrapper[7480]: I0308 22:11:22.433378 7480 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 22:11:22.433894 master-0 kubenswrapper[7480]: I0308 22:11:22.433857 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" Mar 08 22:11:22.434031 master-0 kubenswrapper[7480]: I0308 22:11:22.434012 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:11:22.435153 master-0 kubenswrapper[7480]: I0308 22:11:22.435124 7480 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"7876de4be365c9e5c092eb2901bb6e41c9485da6dea9f0a90861bb5179a92ed4"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 08 22:11:22.435366 master-0 kubenswrapper[7480]: I0308 22:11:22.435342 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="cluster-policy-controller" containerID="cri-o://7876de4be365c9e5c092eb2901bb6e41c9485da6dea9f0a90861bb5179a92ed4" gracePeriod=30 Mar 08 22:11:22.504100 master-0 kubenswrapper[7480]: I0308 22:11:22.504013 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:22.504100 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:22.504100 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:22.504100 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:22.504706 master-0 kubenswrapper[7480]: I0308 22:11:22.504179 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:22.811766 master-0 kubenswrapper[7480]: I0308 22:11:22.811716 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5bd68ed75dc57765fa56dbf42c892ba9/cluster-policy-controller/2.log" Mar 08 22:11:22.812878 master-0 kubenswrapper[7480]: I0308 22:11:22.812821 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5bd68ed75dc57765fa56dbf42c892ba9/cluster-policy-controller/1.log" Mar 08 22:11:22.814815 master-0 kubenswrapper[7480]: I0308 22:11:22.814776 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5bd68ed75dc57765fa56dbf42c892ba9/kube-controller-manager/0.log" Mar 08 22:11:22.814950 master-0 kubenswrapper[7480]: I0308 22:11:22.814833 7480 generic.go:334] "Generic (PLEG): container finished" podID="5bd68ed75dc57765fa56dbf42c892ba9" containerID="7876de4be365c9e5c092eb2901bb6e41c9485da6dea9f0a90861bb5179a92ed4" exitCode=255 Mar 08 22:11:22.814950 master-0 kubenswrapper[7480]: I0308 22:11:22.814876 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5bd68ed75dc57765fa56dbf42c892ba9","Type":"ContainerDied","Data":"7876de4be365c9e5c092eb2901bb6e41c9485da6dea9f0a90861bb5179a92ed4"} Mar 08 22:11:22.814950 master-0 kubenswrapper[7480]: I0308 22:11:22.814922 7480 scope.go:117] 
"RemoveContainer" containerID="f3e6830f6be965a1b71f9b3b1c36d3c6333b3e757c1527692bfbad8043cb5f84" Mar 08 22:11:23.352782 master-0 kubenswrapper[7480]: E0308 22:11:23.352580 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded" Mar 08 22:11:23.505106 master-0 kubenswrapper[7480]: I0308 22:11:23.504991 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:23.505106 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:23.505106 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:23.505106 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:23.506287 master-0 kubenswrapper[7480]: I0308 22:11:23.505141 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:23.833194 master-0 kubenswrapper[7480]: I0308 22:11:23.833117 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5bd68ed75dc57765fa56dbf42c892ba9/cluster-policy-controller/2.log" Mar 08 22:11:23.837929 master-0 kubenswrapper[7480]: I0308 22:11:23.837856 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5bd68ed75dc57765fa56dbf42c892ba9/kube-controller-manager/0.log" Mar 08 22:11:23.838099 master-0 kubenswrapper[7480]: I0308 22:11:23.837961 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5bd68ed75dc57765fa56dbf42c892ba9","Type":"ContainerStarted","Data":"fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3"} Mar 08 22:11:24.504983 master-0 kubenswrapper[7480]: I0308 22:11:24.504864 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:24.504983 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:24.504983 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:24.504983 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:24.506435 master-0 kubenswrapper[7480]: I0308 22:11:24.505006 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:25.504499 master-0 kubenswrapper[7480]: I0308 22:11:25.504413 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:25.504499 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:25.504499 master-0 kubenswrapper[7480]: [+]process-running 
ok Mar 08 22:11:25.504499 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:25.504499 master-0 kubenswrapper[7480]: I0308 22:11:25.504523 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:25.821464 master-0 kubenswrapper[7480]: I0308 22:11:25.821364 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 08 22:11:25.863992 master-0 kubenswrapper[7480]: I0308 22:11:25.863906 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 08 22:11:26.475428 master-0 kubenswrapper[7480]: E0308 22:11:26.475310 7480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s" Mar 08 22:11:26.505835 master-0 kubenswrapper[7480]: I0308 22:11:26.505728 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:26.505835 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:26.505835 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:26.505835 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:26.505835 master-0 kubenswrapper[7480]: I0308 22:11:26.505818 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:27.505146 master-0 kubenswrapper[7480]: I0308 22:11:27.504972 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:27.505146 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:27.505146 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:27.505146 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:27.505146 master-0 kubenswrapper[7480]: I0308 22:11:27.505105 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:27.781863 master-0 kubenswrapper[7480]: I0308 22:11:27.781650 7480 scope.go:117] "RemoveContainer" containerID="d74470db0f0dbce9d14695f1d68e008bcfbbf4781712d0e2ba9a149fa469dffb" Mar 08 22:11:27.782239 master-0 kubenswrapper[7480]: E0308 22:11:27.782034 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-wklhr_openshift-cluster-storage-operator(c901b468-b8e9-48f8-8050-0d54e24e2adb)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" 
podUID="c901b468-b8e9-48f8-8050-0d54e24e2adb" Mar 08 22:11:28.503898 master-0 kubenswrapper[7480]: I0308 22:11:28.503735 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:28.503898 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:28.503898 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:28.503898 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:28.503898 master-0 kubenswrapper[7480]: I0308 22:11:28.503878 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:29.432951 master-0 kubenswrapper[7480]: I0308 22:11:29.432840 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:11:29.432951 master-0 kubenswrapper[7480]: I0308 22:11:29.432946 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:11:29.504459 master-0 kubenswrapper[7480]: I0308 22:11:29.504322 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:29.504459 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:29.504459 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:29.504459 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:29.504990 master-0 kubenswrapper[7480]: I0308 22:11:29.504473 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:30.536278 master-0 kubenswrapper[7480]: I0308 22:11:30.536151 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:30.536278 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:30.536278 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:30.536278 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:30.537618 master-0 kubenswrapper[7480]: I0308 22:11:30.536284 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:30.857599 master-0 kubenswrapper[7480]: I0308 22:11:30.857508 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 08 22:11:31.503327 master-0 kubenswrapper[7480]: I0308 22:11:31.503228 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:31.503327 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:31.503327 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:31.503327 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:31.503327 master-0 kubenswrapper[7480]: I0308 22:11:31.503301 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:32.433739 master-0 kubenswrapper[7480]: I0308 22:11:32.433587 7480 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 22:11:32.433739 master-0 kubenswrapper[7480]: I0308 22:11:32.433724 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 22:11:32.508504 master-0 kubenswrapper[7480]: I0308 22:11:32.508350 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:32.508504 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:32.508504 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:32.508504 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:32.508504 master-0 kubenswrapper[7480]: I0308 22:11:32.508469 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:33.354051 master-0 kubenswrapper[7480]: E0308 22:11:33.353909 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 22:11:33.503279 master-0 kubenswrapper[7480]: I0308 22:11:33.503190 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:33.503279 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:33.503279 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:33.503279 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:33.503279 master-0 kubenswrapper[7480]: I0308 22:11:33.503276 7480 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:34.504240 master-0 kubenswrapper[7480]: I0308 22:11:34.504126 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:34.504240 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:34.504240 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:34.504240 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:34.505219 master-0 kubenswrapper[7480]: I0308 22:11:34.504275 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:35.504940 master-0 kubenswrapper[7480]: I0308 22:11:35.504801 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:35.504940 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:35.504940 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:35.504940 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:35.506257 master-0 kubenswrapper[7480]: I0308 22:11:35.504987 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:36.504430 master-0 kubenswrapper[7480]: I0308 22:11:36.504309 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:36.504430 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:36.504430 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:36.504430 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:36.506248 master-0 kubenswrapper[7480]: I0308 22:11:36.504447 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:37.503572 master-0 kubenswrapper[7480]: I0308 22:11:37.503366 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:37.503572 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:37.503572 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:37.503572 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:37.503572 master-0 kubenswrapper[7480]: I0308 22:11:37.503464 7480 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:37.969569 master-0 kubenswrapper[7480]: I0308 22:11:37.969496 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-xwmmm_d9e9c931-9595-42f1-bbc2-c412286f6cd1/cluster-baremetal-operator/2.log" Mar 08 22:11:37.970506 master-0 kubenswrapper[7480]: I0308 22:11:37.970448 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-xwmmm_d9e9c931-9595-42f1-bbc2-c412286f6cd1/cluster-baremetal-operator/1.log" Mar 08 22:11:37.970947 master-0 kubenswrapper[7480]: I0308 22:11:37.970892 7480 generic.go:334] "Generic (PLEG): container finished" podID="d9e9c931-9595-42f1-bbc2-c412286f6cd1" containerID="f6e2611dc907c17bbce51678676042badff55c1b3f801a765e588a3f1a01f63e" exitCode=1 Mar 08 22:11:37.971020 master-0 kubenswrapper[7480]: I0308 22:11:37.970954 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" event={"ID":"d9e9c931-9595-42f1-bbc2-c412286f6cd1","Type":"ContainerDied","Data":"f6e2611dc907c17bbce51678676042badff55c1b3f801a765e588a3f1a01f63e"} Mar 08 22:11:37.971020 master-0 kubenswrapper[7480]: I0308 22:11:37.971004 7480 scope.go:117] "RemoveContainer" containerID="bcc6f26fb91d7fadf6887617bfb463e5c03667a9473c0563f69e191080e03b4a" Mar 08 22:11:37.972032 master-0 kubenswrapper[7480]: I0308 22:11:37.971961 7480 scope.go:117] "RemoveContainer" containerID="f6e2611dc907c17bbce51678676042badff55c1b3f801a765e588a3f1a01f63e" Mar 08 22:11:37.972646 master-0 kubenswrapper[7480]: E0308 22:11:37.972552 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-5cdb4c5598-xwmmm_openshift-machine-api(d9e9c931-9595-42f1-bbc2-c412286f6cd1)\"" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" podUID="d9e9c931-9595-42f1-bbc2-c412286f6cd1" Mar 08 22:11:38.504432 master-0 kubenswrapper[7480]: I0308 22:11:38.504358 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:38.504432 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:38.504432 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:38.504432 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:38.505138 master-0 kubenswrapper[7480]: I0308 22:11:38.505090 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:38.983313 master-0 kubenswrapper[7480]: I0308 22:11:38.983223 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-xwmmm_d9e9c931-9595-42f1-bbc2-c412286f6cd1/cluster-baremetal-operator/2.log" Mar 08 22:11:39.504192 master-0 kubenswrapper[7480]: I0308 22:11:39.504103 7480 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:39.504192 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:39.504192 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:39.504192 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:39.504787 master-0 kubenswrapper[7480]: I0308 22:11:39.504201 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:40.260184 master-0 kubenswrapper[7480]: E0308 22:11:40.259891 7480 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{cni-sysctl-allowlist-ds-xlrwk.189afd12478436e3 openshift-multus 11896 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-multus,Name:cni-sysctl-allowlist-ds-xlrwk,UID:7147d808-f9a2-434c-ae54-77d82a3d2e1f,APIVersion:v1,ResourceVersion:11672,FieldPath:spec.containers{kube-multus-additional-cni-plugins},},Reason:Unhealthy,Message:Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 22:07:38 +0000 UTC,LastTimestamp:2026-03-08 22:07:58.2746145 +0000 UTC m=+628.728235102,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 22:11:40.504847 master-0 kubenswrapper[7480]: I0308 22:11:40.504743 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:40.504847 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:40.504847 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:40.504847 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:40.505524 master-0 kubenswrapper[7480]: I0308 22:11:40.504885 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:41.504396 master-0 kubenswrapper[7480]: I0308 22:11:41.504307 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:41.504396 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:41.504396 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:41.504396 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:41.505454 master-0 kubenswrapper[7480]: I0308 22:11:41.505412 7480 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:41.781782 master-0 kubenswrapper[7480]: I0308 22:11:41.781607 7480 scope.go:117] "RemoveContainer" containerID="d74470db0f0dbce9d14695f1d68e008bcfbbf4781712d0e2ba9a149fa469dffb" Mar 08 22:11:41.782505 master-0 kubenswrapper[7480]: E0308 22:11:41.782457 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-wklhr_openshift-cluster-storage-operator(c901b468-b8e9-48f8-8050-0d54e24e2adb)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" podUID="c901b468-b8e9-48f8-8050-0d54e24e2adb" Mar 08 22:11:42.433191 master-0 kubenswrapper[7480]: I0308 22:11:42.432949 7480 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 22:11:42.433191 master-0 kubenswrapper[7480]: I0308 22:11:42.433045 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 22:11:42.503457 master-0 kubenswrapper[7480]: I0308 22:11:42.503259 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:42.503457 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:42.503457 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:42.503457 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:42.503457 master-0 kubenswrapper[7480]: I0308 22:11:42.503373 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:43.028568 master-0 kubenswrapper[7480]: I0308 22:11:43.028445 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-cjdgr_84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/ingress-operator/4.log" Mar 08 22:11:43.029550 master-0 kubenswrapper[7480]: I0308 22:11:43.029385 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-cjdgr_84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/ingress-operator/3.log" Mar 08 22:11:43.030476 master-0 kubenswrapper[7480]: I0308 22:11:43.030386 7480 generic.go:334] "Generic (PLEG): container finished" podID="84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed" containerID="31335e26ca3ad44282a26b6dd2ea0c331b70b964fab198517349f95631c93a18" exitCode=1 Mar 08 22:11:43.030666 master-0 kubenswrapper[7480]: I0308 
22:11:43.030464 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" event={"ID":"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed","Type":"ContainerDied","Data":"31335e26ca3ad44282a26b6dd2ea0c331b70b964fab198517349f95631c93a18"} Mar 08 22:11:43.030666 master-0 kubenswrapper[7480]: I0308 22:11:43.030557 7480 scope.go:117] "RemoveContainer" containerID="11be5746bd3e725240b9d330f64ada9a50979ab4691f07ea934a8eda8d86e8b5" Mar 08 22:11:43.031480 master-0 kubenswrapper[7480]: I0308 22:11:43.031405 7480 scope.go:117] "RemoveContainer" containerID="31335e26ca3ad44282a26b6dd2ea0c331b70b964fab198517349f95631c93a18" Mar 08 22:11:43.031983 master-0 kubenswrapper[7480]: E0308 22:11:43.031925 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-cjdgr_openshift-ingress-operator(84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" podUID="84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed" Mar 08 22:11:43.355135 master-0 kubenswrapper[7480]: E0308 22:11:43.354986 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 22:11:43.355135 master-0 kubenswrapper[7480]: E0308 22:11:43.355093 7480 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 08 22:11:43.477318 master-0 kubenswrapper[7480]: E0308 22:11:43.477222 7480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 08 22:11:43.504026 master-0 kubenswrapper[7480]: I0308 22:11:43.503927 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:43.504026 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:43.504026 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:43.504026 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:43.504366 master-0 kubenswrapper[7480]: I0308 22:11:43.504102 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:44.042428 master-0 kubenswrapper[7480]: I0308 22:11:44.042323 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-cjdgr_84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/ingress-operator/4.log" Mar 08 22:11:44.504120 master-0 kubenswrapper[7480]: I0308 22:11:44.503946 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 
22:11:44.504120 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:44.504120 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:44.504120 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:44.504664 master-0 kubenswrapper[7480]: I0308 22:11:44.504168 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:45.551285 master-0 kubenswrapper[7480]: I0308 22:11:45.551180 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:45.551285 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:45.551285 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:45.551285 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:45.552174 master-0 kubenswrapper[7480]: I0308 22:11:45.551329 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:46.504182 master-0 kubenswrapper[7480]: I0308 22:11:46.504033 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:46.504182 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:46.504182 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:46.504182 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:46.504889 master-0 kubenswrapper[7480]: I0308 22:11:46.504221 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:47.503813 master-0 kubenswrapper[7480]: I0308 22:11:47.503693 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:11:47.503813 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:11:47.503813 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:11:47.503813 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:11:47.505007 master-0 kubenswrapper[7480]: I0308 22:11:47.503827 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:11:48.504611 master-0 kubenswrapper[7480]: I0308 22:11:48.504463 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld 
Mar 08 22:11:48.504611 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:11:48.504611 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:11:48.504611 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:11:48.504611 master-0 kubenswrapper[7480]: I0308 22:11:48.504575 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:11:49.503366 master-0 kubenswrapper[7480]: I0308 22:11:49.503260 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:11:49.503366 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:11:49.503366 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:11:49.503366 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:11:49.503366 master-0 kubenswrapper[7480]: I0308 22:11:49.503349 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:11:50.504464 master-0 kubenswrapper[7480]: I0308 22:11:50.504338 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:11:50.504464 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:11:50.504464 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:11:50.504464 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:11:50.505297 master-0 kubenswrapper[7480]: I0308 22:11:50.504501 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:11:50.760835 master-0 kubenswrapper[7480]: E0308 22:11:50.760634 7480 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 08 22:11:50.782302 master-0 kubenswrapper[7480]: I0308 22:11:50.782212 7480 scope.go:117] "RemoveContainer" containerID="f6e2611dc907c17bbce51678676042badff55c1b3f801a765e588a3f1a01f63e"
Mar 08 22:11:50.782815 master-0 kubenswrapper[7480]: E0308 22:11:50.782751 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-5cdb4c5598-xwmmm_openshift-machine-api(d9e9c931-9595-42f1-bbc2-c412286f6cd1)\"" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" podUID="d9e9c931-9595-42f1-bbc2-c412286f6cd1"
Mar 08 22:11:51.095066 master-0 kubenswrapper[7480]: I0308 22:11:51.094982 7480 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="5b9fb57f-3c02-459c-97cf-261a396cc93f"
Mar 08 22:11:51.095066 master-0 kubenswrapper[7480]: I0308 22:11:51.095035 7480 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="5b9fb57f-3c02-459c-97cf-261a396cc93f"
Mar 08 22:11:51.503761 master-0 kubenswrapper[7480]: I0308 22:11:51.503586 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:11:51.503761 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:11:51.503761 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:11:51.503761 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:11:51.503761 master-0 kubenswrapper[7480]: I0308 22:11:51.503674 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:11:52.432739 master-0 kubenswrapper[7480]: I0308 22:11:52.432611 7480 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 22:11:52.432739 master-0 kubenswrapper[7480]: I0308 22:11:52.432720 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 22:11:52.434140 master-0 kubenswrapper[7480]: I0308 22:11:52.432817 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 22:11:52.434140 master-0 kubenswrapper[7480]: I0308 22:11:52.433852 7480 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Mar 08 22:11:52.434140 master-0 kubenswrapper[7480]: I0308 22:11:52.434046 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="cluster-policy-controller" containerID="cri-o://fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3" gracePeriod=30
Mar 08 22:11:52.504334 master-0 kubenswrapper[7480]: I0308 22:11:52.504229 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:11:52.504334 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:11:52.504334 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:11:52.504334 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:11:52.504676 master-0 kubenswrapper[7480]: I0308 22:11:52.504344 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:11:52.557941 master-0 kubenswrapper[7480]: E0308 22:11:52.557831 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(5bd68ed75dc57765fa56dbf42c892ba9)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5bd68ed75dc57765fa56dbf42c892ba9"
Mar 08 22:11:52.610477 master-0 kubenswrapper[7480]: I0308 22:11:52.610321 7480 status_manager.go:851] "Failed to get status for pod" podUID="7147d808-f9a2-434c-ae54-77d82a3d2e1f" pod="openshift-multus/cni-sysctl-allowlist-ds-xlrwk" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods cni-sysctl-allowlist-ds-xlrwk)"
Mar 08 22:11:53.115481 master-0 kubenswrapper[7480]: I0308 22:11:53.115386 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5bd68ed75dc57765fa56dbf42c892ba9/cluster-policy-controller/3.log"
Mar 08 22:11:53.116321 master-0 kubenswrapper[7480]: I0308 22:11:53.116241 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5bd68ed75dc57765fa56dbf42c892ba9/cluster-policy-controller/2.log"
Mar 08 22:11:53.119001 master-0 kubenswrapper[7480]: I0308 22:11:53.118951 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5bd68ed75dc57765fa56dbf42c892ba9/kube-controller-manager/0.log"
Mar 08 22:11:53.119163 master-0 kubenswrapper[7480]: I0308 22:11:53.119013 7480 generic.go:334] "Generic (PLEG): container finished" podID="5bd68ed75dc57765fa56dbf42c892ba9" containerID="fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3" exitCode=255
Mar 08 22:11:53.119163 master-0 kubenswrapper[7480]: I0308 22:11:53.119058 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5bd68ed75dc57765fa56dbf42c892ba9","Type":"ContainerDied","Data":"fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3"}
Mar 08 22:11:53.119322 master-0 kubenswrapper[7480]: I0308 22:11:53.119141 7480 scope.go:117] "RemoveContainer" containerID="7876de4be365c9e5c092eb2901bb6e41c9485da6dea9f0a90861bb5179a92ed4"
Mar 08 22:11:53.119892 master-0 kubenswrapper[7480]: I0308 22:11:53.119857 7480 scope.go:117] "RemoveContainer" containerID="fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3"
Mar 08 22:11:53.120253 master-0 kubenswrapper[7480]: E0308 22:11:53.120203 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(5bd68ed75dc57765fa56dbf42c892ba9)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5bd68ed75dc57765fa56dbf42c892ba9"
Mar 08 22:11:53.503779 master-0 kubenswrapper[7480]: I0308 22:11:53.503454 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:11:53.503779 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:11:53.503779 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:11:53.503779 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:11:53.505137 master-0 kubenswrapper[7480]: I0308 22:11:53.503807 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:11:54.139680 master-0 kubenswrapper[7480]: I0308 22:11:54.139551 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5bd68ed75dc57765fa56dbf42c892ba9/cluster-policy-controller/3.log"
Mar 08 22:11:54.143048 master-0 kubenswrapper[7480]: I0308 22:11:54.142978 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5bd68ed75dc57765fa56dbf42c892ba9/kube-controller-manager/0.log"
Mar 08 22:11:54.503889 master-0 kubenswrapper[7480]: I0308 22:11:54.503695 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:11:54.503889 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:11:54.503889 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:11:54.503889 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:11:54.503889 master-0 kubenswrapper[7480]: I0308 22:11:54.503791 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:11:54.782333 master-0 kubenswrapper[7480]: I0308 22:11:54.782137 7480 scope.go:117] "RemoveContainer" containerID="31335e26ca3ad44282a26b6dd2ea0c331b70b964fab198517349f95631c93a18"
Mar 08 22:11:54.782607 master-0 kubenswrapper[7480]: E0308 22:11:54.782552 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-cjdgr_openshift-ingress-operator(84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" podUID="84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed"
Mar 08 22:11:55.503994 master-0 kubenswrapper[7480]: I0308 22:11:55.503875 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:11:55.503994 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:11:55.503994 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:11:55.503994 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:11:55.505024 master-0 kubenswrapper[7480]: I0308 22:11:55.503975 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:11:55.781210 master-0 kubenswrapper[7480]: I0308 22:11:55.781004 7480 scope.go:117] "RemoveContainer" containerID="d74470db0f0dbce9d14695f1d68e008bcfbbf4781712d0e2ba9a149fa469dffb"
Mar 08 22:11:56.167812 master-0 kubenswrapper[7480]: I0308 22:11:56.167726 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-wklhr_c901b468-b8e9-48f8-8050-0d54e24e2adb/snapshot-controller/3.log"
Mar 08 22:11:56.168186 master-0 kubenswrapper[7480]: I0308 22:11:56.167817 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" event={"ID":"c901b468-b8e9-48f8-8050-0d54e24e2adb","Type":"ContainerStarted","Data":"2d5837857b12c31514737c752f1c881539906b79a846525445cc0f9995a692a4"}
Mar 08 22:11:56.503449 master-0 kubenswrapper[7480]: I0308 22:11:56.503272 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:11:56.503449 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:11:56.503449 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:11:56.503449 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:11:56.503449 master-0 kubenswrapper[7480]: I0308 22:11:56.503390 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:11:57.151218 master-0 kubenswrapper[7480]: I0308 22:11:57.150172 7480 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-0"
Mar 08 22:11:57.159752 master-0 kubenswrapper[7480]: I0308 22:11:57.155939 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podStartSLOduration=260.155909965 podStartE2EDuration="4m20.155909965s" podCreationTimestamp="2026-03-08 22:07:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:07:39.493453729 +0000 UTC m=+609.947074381" watchObservedRunningTime="2026-03-08 22:11:57.155909965 +0000 UTC m=+867.609530637"
Mar 08 22:11:57.160378 master-0 kubenswrapper[7480]: I0308 22:11:57.160332 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0"]
Mar 08 22:11:57.179730 master-0 kubenswrapper[7480]: I0308 22:11:57.179592 7480 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0"]
Mar 08 22:11:57.214212 master-0 kubenswrapper[7480]: I0308 22:11:57.213239 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-ddw98"]
Mar 08 22:11:57.220927 master-0 kubenswrapper[7480]: I0308 22:11:57.220764 7480 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-ddw98"]
Mar 08 22:11:57.505373 master-0 kubenswrapper[7480]: I0308 22:11:57.505177 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:11:57.505373 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:11:57.505373 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:11:57.505373 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:11:57.505373 master-0 kubenswrapper[7480]: I0308 22:11:57.505265 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:11:57.739261 master-0 kubenswrapper[7480]: I0308 22:11:57.739171 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-xlrwk"]
Mar 08 22:11:57.745954 master-0 kubenswrapper[7480]: I0308 22:11:57.745914 7480 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-xlrwk"]
Mar 08 22:11:57.796874 master-0 kubenswrapper[7480]: I0308 22:11:57.796559 7480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dfc8afd-2330-46a4-ae5b-36522102b332" path="/var/lib/kubelet/pods/1dfc8afd-2330-46a4-ae5b-36522102b332/volumes"
Mar 08 22:11:57.802803 master-0 kubenswrapper[7480]: I0308 22:11:57.799880 7480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7147d808-f9a2-434c-ae54-77d82a3d2e1f" path="/var/lib/kubelet/pods/7147d808-f9a2-434c-ae54-77d82a3d2e1f/volumes"
Mar 08 22:11:58.504858 master-0 kubenswrapper[7480]: I0308 22:11:58.504674 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:11:58.504858 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:11:58.504858 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:11:58.504858 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:11:58.505873 master-0 kubenswrapper[7480]: I0308 22:11:58.504929 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:11:59.432746 master-0 kubenswrapper[7480]: I0308 22:11:59.432647 7480 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 22:11:59.434127 master-0 kubenswrapper[7480]: I0308 22:11:59.434014 7480 scope.go:117] "RemoveContainer" containerID="fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3"
Mar 08 22:11:59.434696 master-0 kubenswrapper[7480]: E0308 22:11:59.434633 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(5bd68ed75dc57765fa56dbf42c892ba9)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5bd68ed75dc57765fa56dbf42c892ba9"
Mar 08 22:11:59.503692 master-0 kubenswrapper[7480]: I0308 22:11:59.503627 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:11:59.503692 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:11:59.503692 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:11:59.503692 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:11:59.504144 master-0 kubenswrapper[7480]: I0308 22:11:59.503722 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:12:00.478519 master-0 kubenswrapper[7480]: E0308 22:12:00.478403 7480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 08 22:12:00.504197 master-0 kubenswrapper[7480]: I0308 22:12:00.504100 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:12:00.504197 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:12:00.504197 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:12:00.504197 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:12:00.504630 master-0 kubenswrapper[7480]: I0308 22:12:00.504227 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:12:01.504140 master-0 kubenswrapper[7480]: I0308 22:12:01.503996 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:12:01.504140 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:12:01.504140 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:12:01.504140 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:12:01.504140 master-0 kubenswrapper[7480]: I0308 22:12:01.504129 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:12:02.505298 master-0 kubenswrapper[7480]: I0308 22:12:02.505194 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:12:02.505298 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:12:02.505298 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:12:02.505298 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:12:02.506665 master-0 kubenswrapper[7480]: I0308 22:12:02.505340 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:12:02.781725 master-0 kubenswrapper[7480]: I0308 22:12:02.781503 7480 scope.go:117] "RemoveContainer" containerID="f6e2611dc907c17bbce51678676042badff55c1b3f801a765e588a3f1a01f63e"
Mar 08 22:12:03.232939 master-0 kubenswrapper[7480]: I0308 22:12:03.232870 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-xwmmm_d9e9c931-9595-42f1-bbc2-c412286f6cd1/cluster-baremetal-operator/2.log"
Mar 08 22:12:03.233526 master-0 kubenswrapper[7480]: I0308 22:12:03.233470 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" event={"ID":"d9e9c931-9595-42f1-bbc2-c412286f6cd1","Type":"ContainerStarted","Data":"ec7250269822a93c50f1982f4d31a397949dd9bb5b4f057769f6310cd009ff62"}
Mar 08 22:12:03.504538 master-0 kubenswrapper[7480]: I0308 22:12:03.504354 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:12:03.504538 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:12:03.504538 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:12:03.504538 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:12:03.504538 master-0 kubenswrapper[7480]: I0308 22:12:03.504493 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:12:03.607933 master-0 kubenswrapper[7480]: E0308 22:12:03.607634 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:11:53Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:11:53Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:11:53Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:11:53Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:ae042a5d32eb2f18d537f2068849e665b55df7d8360daedaaeea98bd2a79e769\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:d077bbabe6cb885ed229119008480493e8364e4bfddaa00b099f68c52b016e6b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1733328350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:063b8972231e65eb43f6545ba37804f68138dc54d97b91a652a1c5bc7dc76aa5\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:cf682d23b2857e455609879a0867d171a221c18e2cec995dd79570b77c5a4705\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1272201949},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:e0c034ae18daa01af8d073f8cc24ae4af87883c664304910eab1167fdfd60c0b\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:ef0c6b9e405f7a452211e063ce07ded04ccbe38b53860bfd71b5a7cd5072830a\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1229556414},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:82ad8d62d92a8cc5e2391e3b0746219bd740cc26741bc7571010d337240fa112\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:ec87cd8fce2d3b4e2b15f9abaea232c03ff5a6dd46002ea24418a21973abf220\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1220167895},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8\\\"],\\\"sizeBytes\\\":880378279},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7\\\"],\\\"sizeBytes\\\":862197440},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce\\\"],\\\"sizeBytes\\\":557426734},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5\\\"],\\\"sizeBytes\\\":513581866},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821\\\"],\\\"sizeBytes\\\":504658657},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9b2e765b795c30c910c331c85226e5db0d56463b6c81d79ded739cba76e2b032\\\"],\\\"sizeBytes\\\":487151732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efed4867528a19e3de56447aa00fe53a6d97b74a207e9adb57f06c62dcc8944e\\\"],\\\"sizeBytes\\\":480534195},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:243ce0f08a360370edf4960aec94fc6c5be9d4aae26cf8c5320adcd047c1b14f\\\"],\\\"sizeBytes\\\":471430788}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 22:12:04.503793 master-0 kubenswrapper[7480]: I0308 22:12:04.503699 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:12:04.503793 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:12:04.503793 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:12:04.503793 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:12:04.503793 master-0 kubenswrapper[7480]: I0308 22:12:04.503790 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:12:05.503900 master-0 kubenswrapper[7480]: I0308 22:12:05.503825 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:12:05.503900 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:12:05.503900 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:12:05.503900 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:12:05.503900 master-0 kubenswrapper[7480]: I0308 22:12:05.503900 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:12:05.781245 master-0 kubenswrapper[7480]: I0308 22:12:05.781040 7480 scope.go:117] "RemoveContainer" containerID="31335e26ca3ad44282a26b6dd2ea0c331b70b964fab198517349f95631c93a18"
Mar 08 22:12:05.781541 master-0 kubenswrapper[7480]: E0308 22:12:05.781474 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-cjdgr_openshift-ingress-operator(84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" podUID="84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed"
Mar 08 22:12:06.503594 master-0 kubenswrapper[7480]: I0308 22:12:06.503444 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:12:06.503594 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:12:06.503594 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:12:06.503594 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:12:06.503594 master-0 kubenswrapper[7480]: I0308 22:12:06.503584 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:12:07.504118 master-0 kubenswrapper[7480]: I0308 22:12:07.503885 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:12:07.504118 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:12:07.504118 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:12:07.504118 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:12:07.504118 master-0 kubenswrapper[7480]: I0308 22:12:07.504097 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:12:08.503800 master-0 kubenswrapper[7480]: I0308 22:12:08.503713 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:12:08.503800 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:12:08.503800 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:12:08.503800 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:12:08.504174 master-0 kubenswrapper[7480]: I0308 22:12:08.503820 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:12:09.504403 master-0 kubenswrapper[7480]: I0308 22:12:09.504327 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:12:09.504403 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:12:09.504403 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:12:09.504403 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:12:09.505567 master-0 kubenswrapper[7480]: I0308 22:12:09.505509 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:12:10.163432 master-0 kubenswrapper[7480]: E0308 22:12:10.163341 7480 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Mar 08 22:12:10.299093 master-0 kubenswrapper[7480]: I0308 22:12:10.299015 7480 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="5b9fb57f-3c02-459c-97cf-261a396cc93f"
Mar 08 22:12:10.299093 master-0 kubenswrapper[7480]: I0308 22:12:10.299061 7480 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="5b9fb57f-3c02-459c-97cf-261a396cc93f"
Mar 08 22:12:10.504177 master-0 kubenswrapper[7480]: I0308 22:12:10.504065 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:12:10.504177 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:12:10.504177 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:12:10.504177 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:12:10.504965 master-0 kubenswrapper[7480]: I0308 22:12:10.504540 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:12:11.503603 master-0 kubenswrapper[7480]: I0308 22:12:11.503515 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:12:11.503603 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:12:11.503603 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:12:11.503603 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:12:11.504137 master-0 kubenswrapper[7480]: I0308 22:12:11.503612 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:12:12.504167 master-0 kubenswrapper[7480]: I0308 22:12:12.504054 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:12:12.504167 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:12:12.504167 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:12:12.504167 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:12:12.505366 master-0 kubenswrapper[7480]: I0308 22:12:12.504172 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:12:12.782212 master-0 kubenswrapper[7480]: I0308 22:12:12.781948 7480 scope.go:117] "RemoveContainer" containerID="fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3"
Mar 08 22:12:12.782603 master-0 kubenswrapper[7480]: E0308 22:12:12.782529 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(5bd68ed75dc57765fa56dbf42c892ba9)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5bd68ed75dc57765fa56dbf42c892ba9"
Mar 08 22:12:13.504619 master-0 kubenswrapper[7480]: I0308 22:12:13.504471 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:12:13.504619 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:12:13.504619 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:12:13.504619 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:12:13.504619 master-0 kubenswrapper[7480]: I0308 22:12:13.504572 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:12:13.609050 master-0 kubenswrapper[7480]: E0308 22:12:13.608173 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded"
Mar 08 22:12:14.265550 master-0 kubenswrapper[7480]: E0308 22:12:14.265347 7480 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189afd1709bfeb95 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:BackOff,Message:Back-off restarting failed container kube-scheduler in pod bootstrap-kube-scheduler-master-0_kube-system(a1a56802af72ce1aac6b5077f1695ac0),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 22:07:58.713359253 +0000 UTC m=+629.166979895,LastTimestamp:2026-03-08 22:07:58.713359253 +0000 UTC m=+629.166979895,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 08 22:12:14.504098 master-0 kubenswrapper[7480]: I0308 22:12:14.503985 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:12:14.504098 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:12:14.504098 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:12:14.504098 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:12:14.504098 master-0 kubenswrapper[7480]: I0308 22:12:14.504103 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:12:15.504419 master-0 kubenswrapper[7480]: I0308 22:12:15.504328 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:12:15.504419 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:12:15.504419 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:12:15.504419 master-0 kubenswrapper[7480]: healthz check failed
Mar 08 22:12:15.504419 master-0 kubenswrapper[7480]: I0308 22:12:15.504418 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 22:12:16.504682 master-0 kubenswrapper[7480]: I0308 22:12:16.503776 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 22:12:16.504682 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld
Mar 08 22:12:16.504682 master-0 kubenswrapper[7480]: [+]process-running ok
Mar 08 22:12:16.504682 master-0 kubenswrapper[7480]: I0308 22:12:16.503776 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:12:16.504682 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:12:16.504682 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:12:16.504682 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:12:16.504682 master-0 kubenswrapper[7480]: I0308 22:12:16.503897 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:12:17.355549 master-0 kubenswrapper[7480]: I0308 22:12:17.355489 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-dvgxg_d9fe466f-5a23-4f69-8a96-44bd5d6194f5/cluster-autoscaler-operator/0.log" Mar 08 22:12:17.356819 master-0 kubenswrapper[7480]: I0308 22:12:17.356749 7480 generic.go:334] "Generic (PLEG): container finished" podID="d9fe466f-5a23-4f69-8a96-44bd5d6194f5" containerID="d28b9b684de2ee6afb8af986b004969105b39b6920f35f943824b725390ab335" exitCode=255 Mar 08 22:12:17.356934 master-0 kubenswrapper[7480]: I0308 22:12:17.356831 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" event={"ID":"d9fe466f-5a23-4f69-8a96-44bd5d6194f5","Type":"ContainerDied","Data":"d28b9b684de2ee6afb8af986b004969105b39b6920f35f943824b725390ab335"} Mar 08 22:12:17.357688 master-0 kubenswrapper[7480]: I0308 22:12:17.357636 7480 scope.go:117] "RemoveContainer" containerID="d28b9b684de2ee6afb8af986b004969105b39b6920f35f943824b725390ab335" Mar 08 22:12:17.480243 master-0 kubenswrapper[7480]: E0308 22:12:17.480201 7480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 08 22:12:17.503743 master-0 kubenswrapper[7480]: I0308 22:12:17.503696 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:12:17.503743 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:12:17.503743 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:12:17.503743 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:12:17.503904 master-0 kubenswrapper[7480]: I0308 22:12:17.503770 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:12:18.370809 master-0 kubenswrapper[7480]: I0308 22:12:18.370734 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-dvgxg_d9fe466f-5a23-4f69-8a96-44bd5d6194f5/cluster-autoscaler-operator/0.log" Mar 08 22:12:18.371699 master-0 kubenswrapper[7480]: I0308 22:12:18.371528 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" event={"ID":"d9fe466f-5a23-4f69-8a96-44bd5d6194f5","Type":"ContainerStarted","Data":"fc58edc3bf36ea26582cbc3848716e910d5b68321e838b246c7ee1964f56327e"} Mar 08 22:12:18.503751 master-0 kubenswrapper[7480]: I0308 22:12:18.503674 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:12:18.503751 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:12:18.503751 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:12:18.503751 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:12:18.504230 master-0 kubenswrapper[7480]: I0308 22:12:18.503767 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:12:18.781771 master-0 kubenswrapper[7480]: I0308 22:12:18.781557 7480 scope.go:117] "RemoveContainer" containerID="31335e26ca3ad44282a26b6dd2ea0c331b70b964fab198517349f95631c93a18" Mar 08 22:12:18.782176 master-0 kubenswrapper[7480]: E0308 22:12:18.782019 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-cjdgr_openshift-ingress-operator(84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" podUID="84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed" Mar 08 22:12:19.503436 master-0 kubenswrapper[7480]: I0308 22:12:19.503345 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:12:19.503436 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:12:19.503436 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:12:19.503436 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:12:19.504556 master-0 kubenswrapper[7480]: I0308 22:12:19.503446 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:12:20.504852 master-0 kubenswrapper[7480]: I0308 22:12:20.504545 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:12:20.504852 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:12:20.504852 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:12:20.504852 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:12:20.504852 master-0 kubenswrapper[7480]: I0308 22:12:20.504709 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
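
Note: the pod_workers.go errors woven through the probe loop ("back-off 40s ... cluster-policy-controller", "back-off 1m20s ... ingress-operator") are the kubelet's crash-loop backoff declining to restart a container before its current delay has expired. In stock Kubernetes the delay starts at 10s, doubles on every failed restart, and is capped at 5m, resetting once a container has run cleanly for a while; a short sketch of that schedule under those assumed constants:

    package main

    import (
        "fmt"
        "time"
    )

    // crashLoopDelay returns the restart delay after n consecutive failures:
    // 10s after the first, doubling each time, capped at 5m.
    func crashLoopDelay(n int, initial, max time.Duration) time.Duration {
        d := initial
        for i := 1; i < n; i++ {
            d *= 2
            if d >= max {
                return max
            }
        }
        return d
    }

    func main() {
        for n := 1; n <= 6; n++ {
            // prints 10s 20s 40s 1m20s 2m40s 5m0s; under this model the
            // "back-off 40s" and "back-off 1m20s" entries above correspond
            // to the third and fourth consecutive failures of those containers
            fmt.Println(crashLoopDelay(n, 10*time.Second, 5*time.Minute))
        }
    }
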
"SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6" event={"ID":"c228b17c-fd7b-4273-ac03-eac5d4a3a4ad","Type":"ContainerDied","Data":"9d57fc4d1e08b9fa4f826dec76d98ab4964d370b21a4f1f3de9ac2217b28ef10"} Mar 08 22:12:21.421894 master-0 kubenswrapper[7480]: I0308 22:12:21.421833 7480 scope.go:117] "RemoveContainer" containerID="9d57fc4d1e08b9fa4f826dec76d98ab4964d370b21a4f1f3de9ac2217b28ef10" Mar 08 22:12:21.503857 master-0 kubenswrapper[7480]: I0308 22:12:21.503717 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:12:21.503857 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:12:21.503857 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:12:21.503857 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:12:21.503857 master-0 kubenswrapper[7480]: I0308 22:12:21.503816 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:12:22.432714 master-0 kubenswrapper[7480]: I0308 22:12:22.432578 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6" event={"ID":"c228b17c-fd7b-4273-ac03-eac5d4a3a4ad","Type":"ContainerStarted","Data":"aad05a87d233cdf378ab6db7c4437a4abb7ff79cc2a7f29656bb2dfe1e7561c4"} Mar 08 22:12:22.503279 master-0 kubenswrapper[7480]: I0308 22:12:22.503200 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:12:22.503279 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:12:22.503279 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:12:22.503279 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:12:22.503279 master-0 kubenswrapper[7480]: I0308 22:12:22.503273 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:12:23.504151 master-0 kubenswrapper[7480]: I0308 22:12:23.504024 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:12:23.504151 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:12:23.504151 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:12:23.504151 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:12:23.505467 master-0 kubenswrapper[7480]: I0308 22:12:23.504184 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:12:23.608629 master-0 kubenswrapper[7480]: E0308 
Mar 08 22:12:23.608629 master-0 kubenswrapper[7480]: E0308 22:12:23.608523 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 22:12:24.503873 master-0 kubenswrapper[7480]: I0308 22:12:24.503748 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:12:24.503873 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:12:24.503873 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:12:24.503873 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:12:24.504704 master-0 kubenswrapper[7480]: I0308 22:12:24.503926 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:12:24.781907 master-0 kubenswrapper[7480]: I0308 22:12:24.781630 7480 scope.go:117] "RemoveContainer" containerID="fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3" Mar 08 22:12:24.782323 master-0 kubenswrapper[7480]: E0308 22:12:24.782215 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(5bd68ed75dc57765fa56dbf42c892ba9)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5bd68ed75dc57765fa56dbf42c892ba9" Mar 08 22:12:25.458366 master-0 kubenswrapper[7480]: I0308 22:12:25.458269 7480 generic.go:334] "Generic (PLEG): container finished" podID="4382d186-34e4-40af-9b92-bb17ddcaa23f" containerID="41b89fabe8bcfa93d37c680741df23c997dd23bfef1e93509706508b89ba3e17" exitCode=0 Mar 08 22:12:25.458741 master-0 kubenswrapper[7480]: I0308 22:12:25.458563 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" event={"ID":"4382d186-34e4-40af-9b92-bb17ddcaa23f","Type":"ContainerDied","Data":"41b89fabe8bcfa93d37c680741df23c997dd23bfef1e93509706508b89ba3e17"} Mar 08 22:12:25.458822 master-0 kubenswrapper[7480]: I0308 22:12:25.458733 7480 scope.go:117] "RemoveContainer" containerID="939aa1886a91ab1eb51e8a1cf13c57622098c7bede001e5d513bea76546b85fa" Mar 08 22:12:25.459537 master-0 kubenswrapper[7480]: I0308 22:12:25.459482 7480 scope.go:117] "RemoveContainer" containerID="41b89fabe8bcfa93d37c680741df23c997dd23bfef1e93509706508b89ba3e17" Mar 08 22:12:25.504140 master-0 kubenswrapper[7480]: I0308 22:12:25.504008 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:12:25.504140 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:12:25.504140 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:12:25.504140 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:12:25.505039 master-0 kubenswrapper[7480]: I0308 22:12:25.504139 7480 prober.go:107] "Probe failed"
probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:12:26.470086 master-0 kubenswrapper[7480]: I0308 22:12:26.470017 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" event={"ID":"4382d186-34e4-40af-9b92-bb17ddcaa23f","Type":"ContainerStarted","Data":"20e77c441ee0dc697e66d86d013ee46d26feb16aaeeb7f34f104d5c3fdb5ce81"} Mar 08 22:12:26.473429 master-0 kubenswrapper[7480]: I0308 22:12:26.473401 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-wklhr_c901b468-b8e9-48f8-8050-0d54e24e2adb/snapshot-controller/4.log" Mar 08 22:12:26.474206 master-0 kubenswrapper[7480]: I0308 22:12:26.474159 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-wklhr_c901b468-b8e9-48f8-8050-0d54e24e2adb/snapshot-controller/3.log" Mar 08 22:12:26.474273 master-0 kubenswrapper[7480]: I0308 22:12:26.474226 7480 generic.go:334] "Generic (PLEG): container finished" podID="c901b468-b8e9-48f8-8050-0d54e24e2adb" containerID="2d5837857b12c31514737c752f1c881539906b79a846525445cc0f9995a692a4" exitCode=1 Mar 08 22:12:26.474309 master-0 kubenswrapper[7480]: I0308 22:12:26.474265 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" event={"ID":"c901b468-b8e9-48f8-8050-0d54e24e2adb","Type":"ContainerDied","Data":"2d5837857b12c31514737c752f1c881539906b79a846525445cc0f9995a692a4"} Mar 08 22:12:26.474342 master-0 kubenswrapper[7480]: I0308 22:12:26.474309 7480 scope.go:117] "RemoveContainer" containerID="d74470db0f0dbce9d14695f1d68e008bcfbbf4781712d0e2ba9a149fa469dffb" Mar 08 22:12:26.474877 master-0 kubenswrapper[7480]: I0308 22:12:26.474853 7480 scope.go:117] "RemoveContainer" containerID="2d5837857b12c31514737c752f1c881539906b79a846525445cc0f9995a692a4" Mar 08 22:12:26.475151 master-0 kubenswrapper[7480]: E0308 22:12:26.475109 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-wklhr_openshift-cluster-storage-operator(c901b468-b8e9-48f8-8050-0d54e24e2adb)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" podUID="c901b468-b8e9-48f8-8050-0d54e24e2adb" Mar 08 22:12:26.504948 master-0 kubenswrapper[7480]: I0308 22:12:26.504867 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:12:26.504948 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:12:26.504948 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:12:26.504948 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:12:26.507598 master-0 kubenswrapper[7480]: I0308 22:12:26.507486 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 
22:12:27.488415 master-0 kubenswrapper[7480]: I0308 22:12:27.488331 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-wklhr_c901b468-b8e9-48f8-8050-0d54e24e2adb/snapshot-controller/4.log" Mar 08 22:12:27.503662 master-0 kubenswrapper[7480]: I0308 22:12:27.503581 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:12:27.503662 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:12:27.503662 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:12:27.503662 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:12:27.504008 master-0 kubenswrapper[7480]: I0308 22:12:27.503684 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:12:28.504478 master-0 kubenswrapper[7480]: I0308 22:12:28.504390 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:12:28.504478 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:12:28.504478 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:12:28.504478 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:12:28.505613 master-0 kubenswrapper[7480]: I0308 22:12:28.504520 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:12:29.506520 master-0 kubenswrapper[7480]: I0308 22:12:29.506394 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:12:29.506520 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:12:29.506520 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:12:29.506520 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:12:29.507624 master-0 kubenswrapper[7480]: I0308 22:12:29.506549 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:12:30.504189 master-0 kubenswrapper[7480]: I0308 22:12:30.504124 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:12:30.504189 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:12:30.504189 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:12:30.504189 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:12:30.504657 master-0 
kubenswrapper[7480]: I0308 22:12:30.504208 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:12:30.514604 master-0 kubenswrapper[7480]: I0308 22:12:30.514525 7480 generic.go:334] "Generic (PLEG): container finished" podID="da51940a-a38f-4baf-9c14-b2f1f46b5aed" containerID="2d58ac2e5c53847ce4d3e3a5eec0022908a1efa2318d790d98db630d929ded39" exitCode=0 Mar 08 22:12:30.514976 master-0 kubenswrapper[7480]: I0308 22:12:30.514615 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" event={"ID":"da51940a-a38f-4baf-9c14-b2f1f46b5aed","Type":"ContainerDied","Data":"2d58ac2e5c53847ce4d3e3a5eec0022908a1efa2318d790d98db630d929ded39"} Mar 08 22:12:30.515372 master-0 kubenswrapper[7480]: I0308 22:12:30.515343 7480 scope.go:117] "RemoveContainer" containerID="2d58ac2e5c53847ce4d3e3a5eec0022908a1efa2318d790d98db630d929ded39" Mar 08 22:12:31.502693 master-0 kubenswrapper[7480]: I0308 22:12:31.502611 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:12:31.502693 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:12:31.502693 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:12:31.502693 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:12:31.503164 master-0 kubenswrapper[7480]: I0308 22:12:31.502707 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:12:31.538314 master-0 kubenswrapper[7480]: I0308 22:12:31.538208 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" event={"ID":"da51940a-a38f-4baf-9c14-b2f1f46b5aed","Type":"ContainerStarted","Data":"8bf16092f9b3870167e38461525b39cb506803114455041ce9e1a82a40465b31"} Mar 08 22:12:31.539059 master-0 kubenswrapper[7480]: I0308 22:12:31.538911 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" Mar 08 22:12:31.781747 master-0 kubenswrapper[7480]: I0308 22:12:31.781593 7480 scope.go:117] "RemoveContainer" containerID="31335e26ca3ad44282a26b6dd2ea0c331b70b964fab198517349f95631c93a18" Mar 08 22:12:31.782101 master-0 kubenswrapper[7480]: E0308 22:12:31.782027 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-cjdgr_openshift-ingress-operator(84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" podUID="84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed" Mar 08 22:12:32.504225 master-0 kubenswrapper[7480]: I0308 22:12:32.504095 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 08 22:12:32.504225 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:12:32.504225 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:12:32.504225 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:12:32.504730 master-0 kubenswrapper[7480]: I0308 22:12:32.504231 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:12:32.538884 master-0 kubenswrapper[7480]: I0308 22:12:32.538807 7480 patch_prober.go:28] interesting pod/route-controller-manager-86888d445f-7f74k container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.47:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 22:12:32.539648 master-0 kubenswrapper[7480]: I0308 22:12:32.538913 7480 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" podUID="da51940a-a38f-4baf-9c14-b2f1f46b5aed" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.47:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 22:12:32.549184 master-0 kubenswrapper[7480]: I0308 22:12:32.549126 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-x5zxr_be431b74-1116-4b0f-8b25-bbb0408411b0/package-server-manager/0.log" Mar 08 22:12:32.549746 master-0 kubenswrapper[7480]: I0308 22:12:32.549685 7480 generic.go:334] "Generic (PLEG): container finished" podID="be431b74-1116-4b0f-8b25-bbb0408411b0" containerID="337d76d1f849217e44f712b0d4de222e21178a127e60c214aafe729c50460441" exitCode=1 Mar 08 22:12:32.549829 master-0 kubenswrapper[7480]: I0308 22:12:32.549742 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" event={"ID":"be431b74-1116-4b0f-8b25-bbb0408411b0","Type":"ContainerDied","Data":"337d76d1f849217e44f712b0d4de222e21178a127e60c214aafe729c50460441"} Mar 08 22:12:32.550928 master-0 kubenswrapper[7480]: I0308 22:12:32.550890 7480 scope.go:117] "RemoveContainer" containerID="337d76d1f849217e44f712b0d4de222e21178a127e60c214aafe729c50460441" Mar 08 22:12:32.832168 master-0 kubenswrapper[7480]: I0308 22:12:32.832113 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 22:12:32.832556 master-0 kubenswrapper[7480]: I0308 22:12:32.832518 7480 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 22:12:33.503211 master-0 kubenswrapper[7480]: I0308 22:12:33.503121 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:12:33.503211 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:12:33.503211 master-0 
kubenswrapper[7480]: [+]process-running ok Mar 08 22:12:33.503211 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:12:33.504435 master-0 kubenswrapper[7480]: I0308 22:12:33.503255 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:12:33.550465 master-0 kubenswrapper[7480]: I0308 22:12:33.550361 7480 patch_prober.go:28] interesting pod/route-controller-manager-86888d445f-7f74k container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.47:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 22:12:33.551193 master-0 kubenswrapper[7480]: I0308 22:12:33.550515 7480 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" podUID="da51940a-a38f-4baf-9c14-b2f1f46b5aed" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.47:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 22:12:33.561203 master-0 kubenswrapper[7480]: I0308 22:12:33.561160 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-x5zxr_be431b74-1116-4b0f-8b25-bbb0408411b0/package-server-manager/0.log" Mar 08 22:12:33.561709 master-0 kubenswrapper[7480]: I0308 22:12:33.561656 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" event={"ID":"be431b74-1116-4b0f-8b25-bbb0408411b0","Type":"ContainerStarted","Data":"c19e41ea10eeb91865413a7a2a10341b501fd30a392251483cdaa631d3ce1ad4"} Mar 08 22:12:33.562260 master-0 kubenswrapper[7480]: I0308 22:12:33.562234 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 22:12:33.610253 master-0 kubenswrapper[7480]: E0308 22:12:33.610187 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 22:12:34.481993 master-0 kubenswrapper[7480]: E0308 22:12:34.481858 7480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 08 22:12:34.504829 master-0 kubenswrapper[7480]: I0308 22:12:34.504687 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:12:34.504829 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:12:34.504829 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:12:34.504829 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:12:34.505357 master-0 
kubenswrapper[7480]: I0308 22:12:34.504847 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:12:34.961597 master-0 kubenswrapper[7480]: I0308 22:12:34.961484 7480 patch_prober.go:28] interesting pod/route-controller-manager-86888d445f-7f74k container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.47:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 22:12:34.962584 master-0 kubenswrapper[7480]: I0308 22:12:34.961628 7480 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" podUID="da51940a-a38f-4baf-9c14-b2f1f46b5aed" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.47:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 22:12:35.504859 master-0 kubenswrapper[7480]: I0308 22:12:35.504763 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:12:35.504859 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:12:35.504859 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:12:35.504859 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:12:35.505378 master-0 kubenswrapper[7480]: I0308 22:12:35.504885 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:12:36.503766 master-0 kubenswrapper[7480]: I0308 22:12:36.503667 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:12:36.503766 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:12:36.503766 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:12:36.503766 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:12:36.504798 master-0 kubenswrapper[7480]: I0308 22:12:36.503783 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:12:37.504880 master-0 kubenswrapper[7480]: I0308 22:12:37.504809 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:12:37.504880 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:12:37.504880 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:12:37.504880 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:12:37.505839 master-0 kubenswrapper[7480]: 
I0308 22:12:37.504896 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:12:37.597064 master-0 kubenswrapper[7480]: I0308 22:12:37.596988 7480 generic.go:334] "Generic (PLEG): container finished" podID="3e38e989-41b8-4c80-99fb-8d414dda5da1" containerID="6ed8d9b29a081602db7df52fa208e1ced8636f34e50cd9dbcb9d6a6d48cd183e" exitCode=0 Mar 08 22:12:37.597352 master-0 kubenswrapper[7480]: I0308 22:12:37.597093 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf" event={"ID":"3e38e989-41b8-4c80-99fb-8d414dda5da1","Type":"ContainerDied","Data":"6ed8d9b29a081602db7df52fa208e1ced8636f34e50cd9dbcb9d6a6d48cd183e"} Mar 08 22:12:37.597824 master-0 kubenswrapper[7480]: I0308 22:12:37.597785 7480 scope.go:117] "RemoveContainer" containerID="6ed8d9b29a081602db7df52fa208e1ced8636f34e50cd9dbcb9d6a6d48cd183e" Mar 08 22:12:37.781491 master-0 kubenswrapper[7480]: I0308 22:12:37.781357 7480 scope.go:117] "RemoveContainer" containerID="fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3" Mar 08 22:12:37.782153 master-0 kubenswrapper[7480]: I0308 22:12:37.782104 7480 scope.go:117] "RemoveContainer" containerID="2d5837857b12c31514737c752f1c881539906b79a846525445cc0f9995a692a4" Mar 08 22:12:37.782477 master-0 kubenswrapper[7480]: E0308 22:12:37.782440 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-wklhr_openshift-cluster-storage-operator(c901b468-b8e9-48f8-8050-0d54e24e2adb)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" podUID="c901b468-b8e9-48f8-8050-0d54e24e2adb" Mar 08 22:12:38.504221 master-0 kubenswrapper[7480]: I0308 22:12:38.503954 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:12:38.504221 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:12:38.504221 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:12:38.504221 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:12:38.504221 master-0 kubenswrapper[7480]: I0308 22:12:38.504121 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:12:38.504221 master-0 kubenswrapper[7480]: I0308 22:12:38.504202 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:12:38.505271 master-0 kubenswrapper[7480]: I0308 22:12:38.505201 7480 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"b774a43655d7769bfa98aff1d64209f6f402f99c955ad8667823c36ae49e4cf7"} pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" containerMessage="Container router failed startup probe, will be restarted"
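
Note: this is the point where the router's startup probe finally gives up: the SyncLoop marks the startup probe "unhealthy", the kubelet records "Container router failed startup probe, will be restarted", and immediately below it kills the container with gracePeriod=3600, the pod's termination grace period. The loop behind it probes once per period and triggers the restart once consecutive failures reach the probe's failureThreshold. A stdlib-only sketch of that control flow (the URL, period, and threshold are illustrative assumptions, not values read from the router deployment):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probeLoop mimics the kubelet's startup-probe handling: a status below
    // 400 counts as success, anything else as failure; reaching the failure
    // threshold triggers the "will be restarted" path seen above.
    func probeLoop(url string, period time.Duration, failureThreshold int) {
        failures := 0
        for {
            resp, err := http.Get(url)
            switch {
            case err == nil && resp.StatusCode < 400:
                resp.Body.Close()
                fmt.Println("startup probe succeeded; container marked Started")
                return
            case err == nil:
                fmt.Printf("Probe failed: HTTP probe failed with statuscode: %d\n", resp.StatusCode)
                resp.Body.Close()
            default:
                fmt.Printf("Probe failed: %v\n", err)
            }
            failures++
            if failures >= failureThreshold {
                fmt.Println("Container failed startup probe, will be restarted")
                return
            }
            time.Sleep(period)
        }
    }

    func main() {
        probeLoop("http://localhost:1936/healthz", time.Second, 30)
    }

The kill that follows honors the pod's terminationGracePeriodSeconds, which is why the router gets the unusually long gracePeriod=3600 rather than the 30s default.
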
Mar 08 22:12:38.506067 master-0 kubenswrapper[7480]: I0308 22:12:38.505275 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" containerID="cri-o://b774a43655d7769bfa98aff1d64209f6f402f99c955ad8667823c36ae49e4cf7" gracePeriod=3600 Mar 08 22:12:38.611707 master-0 kubenswrapper[7480]: I0308 22:12:38.611634 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf" event={"ID":"3e38e989-41b8-4c80-99fb-8d414dda5da1","Type":"ContainerStarted","Data":"2c31cb3fb4a5626349fa3efde605472409d0006c56bde3665977151422412956"} Mar 08 22:12:38.614047 master-0 kubenswrapper[7480]: I0308 22:12:38.614012 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5bd68ed75dc57765fa56dbf42c892ba9/cluster-policy-controller/3.log" Mar 08 22:12:38.616904 master-0 kubenswrapper[7480]: I0308 22:12:38.616858 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5bd68ed75dc57765fa56dbf42c892ba9/kube-controller-manager/0.log" Mar 08 22:12:38.617000 master-0 kubenswrapper[7480]: I0308 22:12:38.616926 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5bd68ed75dc57765fa56dbf42c892ba9","Type":"ContainerStarted","Data":"f5fa12a7a68e662a35951718d219d0ea3e85eb9ea964be86f85ab1b99955dc24"} Mar 08 22:12:39.432928 master-0 kubenswrapper[7480]: I0308 22:12:39.432786 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:12:39.433273 master-0 kubenswrapper[7480]: I0308 22:12:39.433016 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:12:42.433434 master-0 kubenswrapper[7480]: I0308 22:12:42.433346 7480 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 22:12:42.433434 master-0 kubenswrapper[7480]: I0308 22:12:42.433426 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 22:12:43.372551 master-0 kubenswrapper[7480]: I0308 22:12:43.372492 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 08 22:12:43.372803 master-0 kubenswrapper[7480]: E0308 22:12:43.372783 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7147d808-f9a2-434c-ae54-77d82a3d2e1f" containerName="kube-multus-additional-cni-plugins" Mar 08 22:12:43.372852 master-0 kubenswrapper[7480]: I0308 22:12:43.372803 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="7147d808-f9a2-434c-ae54-77d82a3d2e1f"
containerName="kube-multus-additional-cni-plugins" Mar 08 22:12:43.372883 master-0 kubenswrapper[7480]: E0308 22:12:43.372851 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee0b93ec-6ea0-4704-9449-57781a482ce4" containerName="installer" Mar 08 22:12:43.372883 master-0 kubenswrapper[7480]: I0308 22:12:43.372860 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee0b93ec-6ea0-4704-9449-57781a482ce4" containerName="installer" Mar 08 22:12:43.372939 master-0 kubenswrapper[7480]: E0308 22:12:43.372911 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f9a1ffa-fdef-4201-81a9-35b944f8c193" containerName="installer" Mar 08 22:12:43.372939 master-0 kubenswrapper[7480]: I0308 22:12:43.372921 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f9a1ffa-fdef-4201-81a9-35b944f8c193" containerName="installer" Mar 08 22:12:43.372994 master-0 kubenswrapper[7480]: E0308 22:12:43.372944 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dfc8afd-2330-46a4-ae5b-36522102b332" containerName="multus-admission-controller" Mar 08 22:12:43.372994 master-0 kubenswrapper[7480]: I0308 22:12:43.372956 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dfc8afd-2330-46a4-ae5b-36522102b332" containerName="multus-admission-controller" Mar 08 22:12:43.372994 master-0 kubenswrapper[7480]: E0308 22:12:43.372966 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dfc8afd-2330-46a4-ae5b-36522102b332" containerName="kube-rbac-proxy" Mar 08 22:12:43.372994 master-0 kubenswrapper[7480]: I0308 22:12:43.372975 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dfc8afd-2330-46a4-ae5b-36522102b332" containerName="kube-rbac-proxy" Mar 08 22:12:43.373180 master-0 kubenswrapper[7480]: I0308 22:12:43.373162 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee0b93ec-6ea0-4704-9449-57781a482ce4" containerName="installer" Mar 08 22:12:43.373218 master-0 kubenswrapper[7480]: I0308 22:12:43.373186 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dfc8afd-2330-46a4-ae5b-36522102b332" containerName="multus-admission-controller" Mar 08 22:12:43.373218 master-0 kubenswrapper[7480]: I0308 22:12:43.373204 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="7147d808-f9a2-434c-ae54-77d82a3d2e1f" containerName="kube-multus-additional-cni-plugins" Mar 08 22:12:43.373285 master-0 kubenswrapper[7480]: I0308 22:12:43.373221 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f9a1ffa-fdef-4201-81a9-35b944f8c193" containerName="installer" Mar 08 22:12:43.373285 master-0 kubenswrapper[7480]: I0308 22:12:43.373249 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dfc8afd-2330-46a4-ae5b-36522102b332" containerName="kube-rbac-proxy" Mar 08 22:12:43.373771 master-0 kubenswrapper[7480]: I0308 22:12:43.373749 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 08 22:12:43.380919 master-0 kubenswrapper[7480]: I0308 22:12:43.380770 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-v7cvh" Mar 08 22:12:43.381114 master-0 kubenswrapper[7480]: I0308 22:12:43.381006 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 08 22:12:43.381433 master-0 kubenswrapper[7480]: I0308 22:12:43.381379 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-retry-1-master-0"] Mar 08 22:12:43.387577 master-0 kubenswrapper[7480]: I0308 22:12:43.383819 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" Mar 08 22:12:43.387577 master-0 kubenswrapper[7480]: I0308 22:12:43.387567 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 08 22:12:43.393002 master-0 kubenswrapper[7480]: I0308 22:12:43.392938 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-7bxvk" Mar 08 22:12:43.404135 master-0 kubenswrapper[7480]: I0308 22:12:43.403809 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 08 22:12:43.406135 master-0 kubenswrapper[7480]: I0308 22:12:43.406097 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-retry-1-master-0"] Mar 08 22:12:43.425058 master-0 kubenswrapper[7480]: I0308 22:12:43.424954 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17d5d8c1-55a9-484d-aca8-6563dfcd4e30-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"17d5d8c1-55a9-484d-aca8-6563dfcd4e30\") " pod="openshift-kube-scheduler/installer-3-retry-1-master-0" Mar 08 22:12:43.425504 master-0 kubenswrapper[7480]: I0308 22:12:43.425480 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/17d5d8c1-55a9-484d-aca8-6563dfcd4e30-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"17d5d8c1-55a9-484d-aca8-6563dfcd4e30\") " pod="openshift-kube-scheduler/installer-3-retry-1-master-0" Mar 08 22:12:43.425621 master-0 kubenswrapper[7480]: I0308 22:12:43.425603 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f0e851e2-74fc-4f4c-b907-3c9158c59cd4-kube-api-access\") pod \"installer-3-master-0\" (UID: \"f0e851e2-74fc-4f4c-b907-3c9158c59cd4\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 08 22:12:43.425733 master-0 kubenswrapper[7480]: I0308 22:12:43.425715 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f0e851e2-74fc-4f4c-b907-3c9158c59cd4-var-lock\") pod \"installer-3-master-0\" (UID: \"f0e851e2-74fc-4f4c-b907-3c9158c59cd4\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 08 22:12:43.425858 master-0 kubenswrapper[7480]: I0308 22:12:43.425840 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17d5d8c1-55a9-484d-aca8-6563dfcd4e30-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"17d5d8c1-55a9-484d-aca8-6563dfcd4e30\") " pod="openshift-kube-scheduler/installer-3-retry-1-master-0" Mar 08 22:12:43.425984 master-0 kubenswrapper[7480]: I0308 22:12:43.425967 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f0e851e2-74fc-4f4c-b907-3c9158c59cd4-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"f0e851e2-74fc-4f4c-b907-3c9158c59cd4\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 08 22:12:43.527445 master-0 kubenswrapper[7480]: I0308 22:12:43.527327 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17d5d8c1-55a9-484d-aca8-6563dfcd4e30-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"17d5d8c1-55a9-484d-aca8-6563dfcd4e30\") " pod="openshift-kube-scheduler/installer-3-retry-1-master-0" Mar 08 22:12:43.528044 master-0 kubenswrapper[7480]: I0308 22:12:43.527460 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/17d5d8c1-55a9-484d-aca8-6563dfcd4e30-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"17d5d8c1-55a9-484d-aca8-6563dfcd4e30\") " pod="openshift-kube-scheduler/installer-3-retry-1-master-0" Mar 08 22:12:43.528044 master-0 kubenswrapper[7480]: I0308 22:12:43.527494 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f0e851e2-74fc-4f4c-b907-3c9158c59cd4-kube-api-access\") pod \"installer-3-master-0\" (UID: \"f0e851e2-74fc-4f4c-b907-3c9158c59cd4\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 08 22:12:43.528044 master-0 kubenswrapper[7480]: I0308 22:12:43.527531 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f0e851e2-74fc-4f4c-b907-3c9158c59cd4-var-lock\") pod \"installer-3-master-0\" (UID: \"f0e851e2-74fc-4f4c-b907-3c9158c59cd4\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 08 22:12:43.528044 master-0 kubenswrapper[7480]: I0308 22:12:43.527567 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17d5d8c1-55a9-484d-aca8-6563dfcd4e30-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"17d5d8c1-55a9-484d-aca8-6563dfcd4e30\") " pod="openshift-kube-scheduler/installer-3-retry-1-master-0" Mar 08 22:12:43.528044 master-0 kubenswrapper[7480]: I0308 22:12:43.527600 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f0e851e2-74fc-4f4c-b907-3c9158c59cd4-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"f0e851e2-74fc-4f4c-b907-3c9158c59cd4\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 08 22:12:43.528044 master-0 kubenswrapper[7480]: I0308 22:12:43.527596 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/17d5d8c1-55a9-484d-aca8-6563dfcd4e30-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"17d5d8c1-55a9-484d-aca8-6563dfcd4e30\") " pod="openshift-kube-scheduler/installer-3-retry-1-master-0" Mar 08 22:12:43.528044 master-0 
kubenswrapper[7480]: I0308 22:12:43.527694 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17d5d8c1-55a9-484d-aca8-6563dfcd4e30-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"17d5d8c1-55a9-484d-aca8-6563dfcd4e30\") " pod="openshift-kube-scheduler/installer-3-retry-1-master-0" Mar 08 22:12:43.528044 master-0 kubenswrapper[7480]: I0308 22:12:43.527708 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f0e851e2-74fc-4f4c-b907-3c9158c59cd4-var-lock\") pod \"installer-3-master-0\" (UID: \"f0e851e2-74fc-4f4c-b907-3c9158c59cd4\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 08 22:12:43.528044 master-0 kubenswrapper[7480]: I0308 22:12:43.527721 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f0e851e2-74fc-4f4c-b907-3c9158c59cd4-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"f0e851e2-74fc-4f4c-b907-3c9158c59cd4\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 08 22:12:43.556311 master-0 kubenswrapper[7480]: I0308 22:12:43.556221 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f0e851e2-74fc-4f4c-b907-3c9158c59cd4-kube-api-access\") pod \"installer-3-master-0\" (UID: \"f0e851e2-74fc-4f4c-b907-3c9158c59cd4\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 08 22:12:43.556476 master-0 kubenswrapper[7480]: I0308 22:12:43.556374 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17d5d8c1-55a9-484d-aca8-6563dfcd4e30-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"17d5d8c1-55a9-484d-aca8-6563dfcd4e30\") " pod="openshift-kube-scheduler/installer-3-retry-1-master-0" Mar 08 22:12:43.610592 master-0 kubenswrapper[7480]: E0308 22:12:43.610539 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 22:12:43.610858 master-0 kubenswrapper[7480]: E0308 22:12:43.610839 7480 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 08 22:12:43.727035 master-0 kubenswrapper[7480]: I0308 22:12:43.726883 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 08 22:12:43.764303 master-0 kubenswrapper[7480]: I0308 22:12:43.763782 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" Mar 08 22:12:44.177505 master-0 kubenswrapper[7480]: I0308 22:12:44.177471 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 08 22:12:44.288458 master-0 kubenswrapper[7480]: I0308 22:12:44.287442 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-retry-1-master-0"] Mar 08 22:12:44.685447 master-0 kubenswrapper[7480]: I0308 22:12:44.685227 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"f0e851e2-74fc-4f4c-b907-3c9158c59cd4","Type":"ContainerStarted","Data":"5ccbb8ad117a453ccde6adce287311d7e602ee66003c156725015647e77006f5"} Mar 08 22:12:44.685447 master-0 kubenswrapper[7480]: I0308 22:12:44.685294 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"f0e851e2-74fc-4f4c-b907-3c9158c59cd4","Type":"ContainerStarted","Data":"7806b893b20c55d1f8afd2a7c71328b4f99e83bbf86148341ea260ee8e9271b9"} Mar 08 22:12:44.687699 master-0 kubenswrapper[7480]: I0308 22:12:44.687373 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" event={"ID":"17d5d8c1-55a9-484d-aca8-6563dfcd4e30","Type":"ContainerStarted","Data":"3d962a2a1642dc189ec4ca9f70645ba6e206abcdbc7f53036013ea522b35f91e"} Mar 08 22:12:44.687699 master-0 kubenswrapper[7480]: I0308 22:12:44.687402 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" event={"ID":"17d5d8c1-55a9-484d-aca8-6563dfcd4e30","Type":"ContainerStarted","Data":"63db28b3ba13f9e9a35ba9cca7260ce6eca529ffba25a33a192bc8a1c9c0d6e8"} Mar 08 22:12:44.736166 master-0 kubenswrapper[7480]: I0308 22:12:44.735980 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-master-0" podStartSLOduration=1.7359582740000001 podStartE2EDuration="1.735958274s" podCreationTimestamp="2026-03-08 22:12:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:12:44.707107007 +0000 UTC m=+915.160727609" watchObservedRunningTime="2026-03-08 22:12:44.735958274 +0000 UTC m=+915.189578876" Mar 08 22:12:44.736166 master-0 kubenswrapper[7480]: I0308 22:12:44.736137 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" podStartSLOduration=1.736132768 podStartE2EDuration="1.736132768s" podCreationTimestamp="2026-03-08 22:12:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:12:44.733103391 +0000 UTC m=+915.186724013" watchObservedRunningTime="2026-03-08 22:12:44.736132768 +0000 UTC m=+915.189753370" Mar 08 22:12:44.961897 master-0 kubenswrapper[7480]: I0308 22:12:44.961767 7480 patch_prober.go:28] interesting pod/route-controller-manager-86888d445f-7f74k container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.47:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 22:12:44.961897 master-0 kubenswrapper[7480]: I0308 22:12:44.961879 
7480 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" podUID="da51940a-a38f-4baf-9c14-b2f1f46b5aed" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.47:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 22:12:46.293149 master-0 kubenswrapper[7480]: I0308 22:12:46.293065 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-retry-1-master-0"] Mar 08 22:12:46.702518 master-0 kubenswrapper[7480]: I0308 22:12:46.702449 7480 generic.go:334] "Generic (PLEG): container finished" podID="de89c423-0f2a-440f-9fa9-92fefea84b09" containerID="c1e691e59e7c1bed851b1abd3631d646daa0cf480534e0faeca027a9151c11dc" exitCode=0 Mar 08 22:12:46.702518 master-0 kubenswrapper[7480]: I0308 22:12:46.702492 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" event={"ID":"de89c423-0f2a-440f-9fa9-92fefea84b09","Type":"ContainerDied","Data":"c1e691e59e7c1bed851b1abd3631d646daa0cf480534e0faeca027a9151c11dc"} Mar 08 22:12:46.703194 master-0 kubenswrapper[7480]: I0308 22:12:46.703167 7480 scope.go:117] "RemoveContainer" containerID="c1e691e59e7c1bed851b1abd3631d646daa0cf480534e0faeca027a9151c11dc" Mar 08 22:12:46.705037 master-0 kubenswrapper[7480]: I0308 22:12:46.704987 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-c4lpf_2851c096-f5cb-4a46-a5a0-ac0b1341033b/cluster-node-tuning-operator/0.log" Mar 08 22:12:46.705037 master-0 kubenswrapper[7480]: I0308 22:12:46.705025 7480 generic.go:334] "Generic (PLEG): container finished" podID="2851c096-f5cb-4a46-a5a0-ac0b1341033b" containerID="9a488623b815fc824bec74857e2960fc417072b53ab920bd8c886dd1a94fa5ae" exitCode=1 Mar 08 22:12:46.705187 master-0 kubenswrapper[7480]: I0308 22:12:46.705088 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" event={"ID":"2851c096-f5cb-4a46-a5a0-ac0b1341033b","Type":"ContainerDied","Data":"9a488623b815fc824bec74857e2960fc417072b53ab920bd8c886dd1a94fa5ae"} Mar 08 22:12:46.705350 master-0 kubenswrapper[7480]: I0308 22:12:46.705327 7480 scope.go:117] "RemoveContainer" containerID="9a488623b815fc824bec74857e2960fc417072b53ab920bd8c886dd1a94fa5ae" Mar 08 22:12:46.707021 master-0 kubenswrapper[7480]: I0308 22:12:46.706999 7480 generic.go:334] "Generic (PLEG): container finished" podID="f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9" containerID="a22b29816e03690faf00c5c6d5f7ea0b06750cd2c50fe9f666b86154f5e12d55" exitCode=0 Mar 08 22:12:46.707084 master-0 kubenswrapper[7480]: I0308 22:12:46.707048 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2" event={"ID":"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9","Type":"ContainerDied","Data":"a22b29816e03690faf00c5c6d5f7ea0b06750cd2c50fe9f666b86154f5e12d55"} Mar 08 22:12:46.707351 master-0 kubenswrapper[7480]: I0308 22:12:46.707331 7480 scope.go:117] "RemoveContainer" containerID="a22b29816e03690faf00c5c6d5f7ea0b06750cd2c50fe9f666b86154f5e12d55" Mar 08 22:12:46.715785 master-0 kubenswrapper[7480]: I0308 22:12:46.715732 7480 generic.go:334] "Generic (PLEG): container finished" podID="04fb7bdb-fb5a-4187-94a3-67c8f09684ed" 
containerID="f871c547308cba5a44237c75ff4479c8163cef5b1e2a7ff5964a521c14faec67" exitCode=0 Mar 08 22:12:46.715913 master-0 kubenswrapper[7480]: I0308 22:12:46.715797 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" event={"ID":"04fb7bdb-fb5a-4187-94a3-67c8f09684ed","Type":"ContainerDied","Data":"f871c547308cba5a44237c75ff4479c8163cef5b1e2a7ff5964a521c14faec67"} Mar 08 22:12:46.715913 master-0 kubenswrapper[7480]: I0308 22:12:46.715828 7480 scope.go:117] "RemoveContainer" containerID="00c5ed3578644c2cfcf3b05743187fa1a4e66cf46b816a9e956e779028d0b36b" Mar 08 22:12:46.716321 master-0 kubenswrapper[7480]: I0308 22:12:46.716288 7480 scope.go:117] "RemoveContainer" containerID="f871c547308cba5a44237c75ff4479c8163cef5b1e2a7ff5964a521c14faec67" Mar 08 22:12:46.721461 master-0 kubenswrapper[7480]: I0308 22:12:46.721418 7480 generic.go:334] "Generic (PLEG): container finished" podID="0d851f97-b21e-432e-a4c3-dc0a8ff00e84" containerID="539c0747d69e37b439f9d78ced15438e6d882433e87666140b9b0adafe3b7125" exitCode=0 Mar 08 22:12:46.721539 master-0 kubenswrapper[7480]: I0308 22:12:46.721499 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5" event={"ID":"0d851f97-b21e-432e-a4c3-dc0a8ff00e84","Type":"ContainerDied","Data":"539c0747d69e37b439f9d78ced15438e6d882433e87666140b9b0adafe3b7125"} Mar 08 22:12:46.722292 master-0 kubenswrapper[7480]: I0308 22:12:46.722252 7480 scope.go:117] "RemoveContainer" containerID="539c0747d69e37b439f9d78ced15438e6d882433e87666140b9b0adafe3b7125" Mar 08 22:12:46.731523 master-0 kubenswrapper[7480]: I0308 22:12:46.731162 7480 generic.go:334] "Generic (PLEG): container finished" podID="f6fbc12f-3c27-4a7a-933f-43a55c960335" containerID="9e2fd1210b8809e9723f044551eadfefcc58034be22d2af001446424e236d937" exitCode=0 Mar 08 22:12:46.731523 master-0 kubenswrapper[7480]: I0308 22:12:46.731246 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg" event={"ID":"f6fbc12f-3c27-4a7a-933f-43a55c960335","Type":"ContainerDied","Data":"9e2fd1210b8809e9723f044551eadfefcc58034be22d2af001446424e236d937"} Mar 08 22:12:46.731665 master-0 kubenswrapper[7480]: I0308 22:12:46.731637 7480 scope.go:117] "RemoveContainer" containerID="9e2fd1210b8809e9723f044551eadfefcc58034be22d2af001446424e236d937" Mar 08 22:12:46.737495 master-0 kubenswrapper[7480]: I0308 22:12:46.737444 7480 generic.go:334] "Generic (PLEG): container finished" podID="b6bc6f78-2c5c-4add-925f-f6568a49c2cc" containerID="ea9d698fbce1d205747d5157a6c744e1ac0246ad5c16718bbe3cc568d31c44f2" exitCode=0 Mar 08 22:12:46.737620 master-0 kubenswrapper[7480]: I0308 22:12:46.737530 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m" event={"ID":"b6bc6f78-2c5c-4add-925f-f6568a49c2cc","Type":"ContainerDied","Data":"ea9d698fbce1d205747d5157a6c744e1ac0246ad5c16718bbe3cc568d31c44f2"} Mar 08 22:12:46.738300 master-0 kubenswrapper[7480]: I0308 22:12:46.738266 7480 scope.go:117] "RemoveContainer" containerID="ea9d698fbce1d205747d5157a6c744e1ac0246ad5c16718bbe3cc568d31c44f2" Mar 08 22:12:46.742969 master-0 kubenswrapper[7480]: I0308 22:12:46.742841 7480 generic.go:334] "Generic (PLEG): container finished" podID="37bf82cb-adea-46d3-a899-136eb1d1f292" containerID="04944f14b53d02d121f70fd7c26fd29d16bc18bb4704e5d81fc7ee613027b6bb" 
exitCode=0 Mar 08 22:12:46.742969 master-0 kubenswrapper[7480]: I0308 22:12:46.742894 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-nl9qg" event={"ID":"37bf82cb-adea-46d3-a899-136eb1d1f292","Type":"ContainerDied","Data":"04944f14b53d02d121f70fd7c26fd29d16bc18bb4704e5d81fc7ee613027b6bb"} Mar 08 22:12:46.743313 master-0 kubenswrapper[7480]: I0308 22:12:46.743235 7480 scope.go:117] "RemoveContainer" containerID="04944f14b53d02d121f70fd7c26fd29d16bc18bb4704e5d81fc7ee613027b6bb" Mar 08 22:12:46.747035 master-0 kubenswrapper[7480]: I0308 22:12:46.746983 7480 generic.go:334] "Generic (PLEG): container finished" podID="e8ef68b9-6f8d-4697-b269-91ee4e310752" containerID="3724b6db595f74186edc6baea18527f6eae9fe894eef0ca447fc3a5e5c129bfc" exitCode=0 Mar 08 22:12:46.747127 master-0 kubenswrapper[7480]: I0308 22:12:46.747051 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-b8zkz" event={"ID":"e8ef68b9-6f8d-4697-b269-91ee4e310752","Type":"ContainerDied","Data":"3724b6db595f74186edc6baea18527f6eae9fe894eef0ca447fc3a5e5c129bfc"} Mar 08 22:12:46.747578 master-0 kubenswrapper[7480]: I0308 22:12:46.747552 7480 scope.go:117] "RemoveContainer" containerID="3724b6db595f74186edc6baea18527f6eae9fe894eef0ca447fc3a5e5c129bfc" Mar 08 22:12:46.750217 master-0 kubenswrapper[7480]: I0308 22:12:46.750184 7480 generic.go:334] "Generic (PLEG): container finished" podID="3f1a7900-a0b2-47fc-b43c-a0a5dee6b657" containerID="85d980d0ad1f366d812777a55826b75d7182615f3739f55dd1c63103d4d0380c" exitCode=0 Mar 08 22:12:46.750272 master-0 kubenswrapper[7480]: I0308 22:12:46.750254 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" event={"ID":"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657","Type":"ContainerDied","Data":"85d980d0ad1f366d812777a55826b75d7182615f3739f55dd1c63103d4d0380c"} Mar 08 22:12:46.750763 master-0 kubenswrapper[7480]: I0308 22:12:46.750728 7480 scope.go:117] "RemoveContainer" containerID="85d980d0ad1f366d812777a55826b75d7182615f3739f55dd1c63103d4d0380c" Mar 08 22:12:46.753487 master-0 kubenswrapper[7480]: I0308 22:12:46.753458 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-znt8q_a21e2296-10cb-4c70-ac3e-2173d35faac4/network-operator/0.log" Mar 08 22:12:46.753551 master-0 kubenswrapper[7480]: I0308 22:12:46.753510 7480 generic.go:334] "Generic (PLEG): container finished" podID="a21e2296-10cb-4c70-ac3e-2173d35faac4" containerID="d653a3f99cf80e74726e1b1340ca117861fb6803c0c0eb0b6d0a40207c074c3a" exitCode=0 Mar 08 22:12:46.753583 master-0 kubenswrapper[7480]: I0308 22:12:46.753543 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" event={"ID":"a21e2296-10cb-4c70-ac3e-2173d35faac4","Type":"ContainerDied","Data":"d653a3f99cf80e74726e1b1340ca117861fb6803c0c0eb0b6d0a40207c074c3a"} Mar 08 22:12:46.753966 master-0 kubenswrapper[7480]: I0308 22:12:46.753931 7480 scope.go:117] "RemoveContainer" containerID="d653a3f99cf80e74726e1b1340ca117861fb6803c0c0eb0b6d0a40207c074c3a" Mar 08 22:12:46.756091 master-0 kubenswrapper[7480]: I0308 22:12:46.756046 7480 generic.go:334] "Generic (PLEG): container finished" podID="a8e00c74-fb72-4e3d-a22c-c38a4772a813" containerID="e72afc2085d471295428d0c6e91b91b2d9a4e2a26d7688d062fbd6d0d26453eb" exitCode=0 Mar 08 22:12:46.756220 master-0 
kubenswrapper[7480]: I0308 22:12:46.756166 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k" event={"ID":"a8e00c74-fb72-4e3d-a22c-c38a4772a813","Type":"ContainerDied","Data":"e72afc2085d471295428d0c6e91b91b2d9a4e2a26d7688d062fbd6d0d26453eb"} Mar 08 22:12:46.757140 master-0 kubenswrapper[7480]: I0308 22:12:46.757120 7480 scope.go:117] "RemoveContainer" containerID="e72afc2085d471295428d0c6e91b91b2d9a4e2a26d7688d062fbd6d0d26453eb" Mar 08 22:12:46.761152 master-0 kubenswrapper[7480]: I0308 22:12:46.761027 7480 generic.go:334] "Generic (PLEG): container finished" podID="971ffa86-4d52-4dc3-ba28-03d116ec3494" containerID="876653e3eaf25a649c1577e2202b14fc9e4231bce10bcb04ae36299b1eb1609e" exitCode=0 Mar 08 22:12:46.761221 master-0 kubenswrapper[7480]: I0308 22:12:46.761136 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw" event={"ID":"971ffa86-4d52-4dc3-ba28-03d116ec3494","Type":"ContainerDied","Data":"876653e3eaf25a649c1577e2202b14fc9e4231bce10bcb04ae36299b1eb1609e"} Mar 08 22:12:46.761926 master-0 kubenswrapper[7480]: I0308 22:12:46.761889 7480 scope.go:117] "RemoveContainer" containerID="876653e3eaf25a649c1577e2202b14fc9e4231bce10bcb04ae36299b1eb1609e" Mar 08 22:12:46.763625 master-0 kubenswrapper[7480]: I0308 22:12:46.763588 7480 generic.go:334] "Generic (PLEG): container finished" podID="b849f992-1020-4633-98be-75705b962fa9" containerID="8a52489302a5dc96ab51b546dab29cb1d4fff7df453456bacfb9302f4b296bd5" exitCode=0 Mar 08 22:12:46.763708 master-0 kubenswrapper[7480]: I0308 22:12:46.763656 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2" event={"ID":"b849f992-1020-4633-98be-75705b962fa9","Type":"ContainerDied","Data":"8a52489302a5dc96ab51b546dab29cb1d4fff7df453456bacfb9302f4b296bd5"} Mar 08 22:12:46.763932 master-0 kubenswrapper[7480]: I0308 22:12:46.763899 7480 scope.go:117] "RemoveContainer" containerID="8a52489302a5dc96ab51b546dab29cb1d4fff7df453456bacfb9302f4b296bd5" Mar 08 22:12:46.766994 master-0 kubenswrapper[7480]: I0308 22:12:46.766963 7480 generic.go:334] "Generic (PLEG): container finished" podID="2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8" containerID="566f64e1e5f69c2bf95c8075567ff0feb7dd0877a1f2fce23e6ae2446c0dbdb2" exitCode=0 Mar 08 22:12:46.767065 master-0 kubenswrapper[7480]: I0308 22:12:46.767039 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz" event={"ID":"2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8","Type":"ContainerDied","Data":"566f64e1e5f69c2bf95c8075567ff0feb7dd0877a1f2fce23e6ae2446c0dbdb2"} Mar 08 22:12:46.767768 master-0 kubenswrapper[7480]: I0308 22:12:46.767705 7480 scope.go:117] "RemoveContainer" containerID="566f64e1e5f69c2bf95c8075567ff0feb7dd0877a1f2fce23e6ae2446c0dbdb2" Mar 08 22:12:46.776259 master-0 kubenswrapper[7480]: I0308 22:12:46.775562 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-x8jg8_d4d01185-e485-4697-92c2-31a044f25d82/openshift-controller-manager-operator/1.log" Mar 08 22:12:46.776259 master-0 kubenswrapper[7480]: I0308 22:12:46.775636 7480 generic.go:334] "Generic (PLEG): container finished" podID="d4d01185-e485-4697-92c2-31a044f25d82" 
containerID="5af2147c5b6156b079ec16c643f5bc1c46f463b8da9a0f84030507704a3988c2" exitCode=0 Mar 08 22:12:46.776259 master-0 kubenswrapper[7480]: I0308 22:12:46.775736 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" event={"ID":"d4d01185-e485-4697-92c2-31a044f25d82","Type":"ContainerDied","Data":"5af2147c5b6156b079ec16c643f5bc1c46f463b8da9a0f84030507704a3988c2"} Mar 08 22:12:46.776447 master-0 kubenswrapper[7480]: I0308 22:12:46.776409 7480 scope.go:117] "RemoveContainer" containerID="5af2147c5b6156b079ec16c643f5bc1c46f463b8da9a0f84030507704a3988c2" Mar 08 22:12:46.780830 master-0 kubenswrapper[7480]: I0308 22:12:46.780788 7480 generic.go:334] "Generic (PLEG): container finished" podID="d0641333-feda-44c5-baf5-ceee4ce3fd8f" containerID="ba63e07913394038e6214607c806df6fc81079644bc68ca5910ad463422e98db" exitCode=0 Mar 08 22:12:46.780904 master-0 kubenswrapper[7480]: I0308 22:12:46.780863 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" event={"ID":"d0641333-feda-44c5-baf5-ceee4ce3fd8f","Type":"ContainerDied","Data":"ba63e07913394038e6214607c806df6fc81079644bc68ca5910ad463422e98db"} Mar 08 22:12:46.781329 master-0 kubenswrapper[7480]: I0308 22:12:46.781290 7480 scope.go:117] "RemoveContainer" containerID="ba63e07913394038e6214607c806df6fc81079644bc68ca5910ad463422e98db" Mar 08 22:12:46.782557 master-0 kubenswrapper[7480]: I0308 22:12:46.782533 7480 scope.go:117] "RemoveContainer" containerID="31335e26ca3ad44282a26b6dd2ea0c331b70b964fab198517349f95631c93a18" Mar 08 22:12:46.782784 master-0 kubenswrapper[7480]: E0308 22:12:46.782756 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-cjdgr_openshift-ingress-operator(84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" podUID="84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed" Mar 08 22:12:46.785102 master-0 kubenswrapper[7480]: I0308 22:12:46.785048 7480 generic.go:334] "Generic (PLEG): container finished" podID="a913c639-ebfc-42a3-85cd-8a460027d3ec" containerID="8bf41d7f7f99e2d4fdb83a25a837511d4994d2551b185499c8662f2b6ce0defe" exitCode=0 Mar 08 22:12:46.785213 master-0 kubenswrapper[7480]: I0308 22:12:46.785107 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" event={"ID":"a913c639-ebfc-42a3-85cd-8a460027d3ec","Type":"ContainerDied","Data":"8bf41d7f7f99e2d4fdb83a25a837511d4994d2551b185499c8662f2b6ce0defe"} Mar 08 22:12:46.786031 master-0 kubenswrapper[7480]: I0308 22:12:46.786011 7480 scope.go:117] "RemoveContainer" containerID="8bf41d7f7f99e2d4fdb83a25a837511d4994d2551b185499c8662f2b6ce0defe" Mar 08 22:12:46.787479 master-0 kubenswrapper[7480]: I0308 22:12:46.787454 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-64gfj_1ef14467-bb62-462d-9dec-dee43e4cc9bd/machine-api-operator/0.log" Mar 08 22:12:46.788257 master-0 kubenswrapper[7480]: I0308 22:12:46.788179 7480 generic.go:334] "Generic (PLEG): container finished" podID="1ef14467-bb62-462d-9dec-dee43e4cc9bd" containerID="8c5935d4c8ced0d1522d2fa823597581df0f0db73a8f0870aa81ef671ab128d8" exitCode=255 Mar 08 22:12:46.788400 master-0 
kubenswrapper[7480]: I0308 22:12:46.788365 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" podUID="17d5d8c1-55a9-484d-aca8-6563dfcd4e30" containerName="installer" containerID="cri-o://3d962a2a1642dc189ec4ca9f70645ba6e206abcdbc7f53036013ea522b35f91e" gracePeriod=30 Mar 08 22:12:46.788490 master-0 kubenswrapper[7480]: I0308 22:12:46.788460 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj" event={"ID":"1ef14467-bb62-462d-9dec-dee43e4cc9bd","Type":"ContainerDied","Data":"8c5935d4c8ced0d1522d2fa823597581df0f0db73a8f0870aa81ef671ab128d8"} Mar 08 22:12:46.788928 master-0 kubenswrapper[7480]: I0308 22:12:46.788899 7480 scope.go:117] "RemoveContainer" containerID="8c5935d4c8ced0d1522d2fa823597581df0f0db73a8f0870aa81ef671ab128d8" Mar 08 22:12:47.009953 master-0 kubenswrapper[7480]: I0308 22:12:47.009898 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" Mar 08 22:12:47.056227 master-0 kubenswrapper[7480]: I0308 22:12:47.056185 7480 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" Mar 08 22:12:47.396574 master-0 kubenswrapper[7480]: I0308 22:12:47.396398 7480 scope.go:117] "RemoveContainer" containerID="2372290458f059a617f7c34963da0c908f74ff47559433f117b121db9f6a2646" Mar 08 22:12:47.447199 master-0 kubenswrapper[7480]: I0308 22:12:47.447137 7480 scope.go:117] "RemoveContainer" containerID="fa11530abd773575590a911f848030e060ab34b160f17f0ed7e7dadcd26f2550" Mar 08 22:12:47.522134 master-0 kubenswrapper[7480]: I0308 22:12:47.521819 7480 scope.go:117] "RemoveContainer" containerID="a33aa7650397c6fcbc3db8208664515afb6c26ede2b1533a472f078a2d4a0ea4" Mar 08 22:12:47.561008 master-0 kubenswrapper[7480]: I0308 22:12:47.560950 7480 scope.go:117] "RemoveContainer" containerID="33e74f7c7bc9716ac9cd2cfb19a68cc948644c1413dc78e99dffc063fbe5f927" Mar 08 22:12:47.633744 master-0 kubenswrapper[7480]: I0308 22:12:47.633696 7480 scope.go:117] "RemoveContainer" containerID="334ebc87bbf952673cd1b3477f45396aaf813413e807f2bdfa8f48d87bc817d9" Mar 08 22:12:47.699913 master-0 kubenswrapper[7480]: I0308 22:12:47.699883 7480 scope.go:117] "RemoveContainer" containerID="6df6f113522fa49700aeaebc115d4f7bc3c6c606f1453723e6b3427085f53838" Mar 08 22:12:47.732480 master-0 kubenswrapper[7480]: I0308 22:12:47.732365 7480 scope.go:117] "RemoveContainer" containerID="c086cbd7303ffe955bb2645d06594a1046769c847ec0d61ce7c507a7b2e3ee42" Mar 08 22:12:47.772372 master-0 kubenswrapper[7480]: I0308 22:12:47.772219 7480 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 22:12:47.790112 master-0 kubenswrapper[7480]: I0308 22:12:47.790081 7480 scope.go:117] "RemoveContainer" containerID="2f8d7fcda4e6f52fa1e1bae05fb59e3135aaa4a13581f1a085c1284cb2c0e356" Mar 08 22:12:47.837271 master-0 kubenswrapper[7480]: I0308 22:12:47.837242 7480 scope.go:117] "RemoveContainer" containerID="5606fcf795565b19c0d649668bacd0041a38e917c804757278c207fde8081155" Mar 08 22:12:48.837626 master-0 kubenswrapper[7480]: I0308 22:12:48.837571 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" 
event={"ID":"a913c639-ebfc-42a3-85cd-8a460027d3ec","Type":"ContainerStarted","Data":"c3d7bacea0e8378e98be2730d885890f020b45654e8e5010663e807c1cff3ed0"} Mar 08 22:12:48.839958 master-0 kubenswrapper[7480]: I0308 22:12:48.839928 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" event={"ID":"a21e2296-10cb-4c70-ac3e-2173d35faac4","Type":"ContainerStarted","Data":"f8e05400c4242a6c2f3881aef7ae629f7a73530a08ee7893c8a1994c2fbd1380"} Mar 08 22:12:48.841549 master-0 kubenswrapper[7480]: I0308 22:12:48.841520 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k" event={"ID":"a8e00c74-fb72-4e3d-a22c-c38a4772a813","Type":"ContainerStarted","Data":"1788c7772d1b5e51ce597b55bb6c08ca4fa7375d57a8cc22127f6515a7008256"} Mar 08 22:12:48.844195 master-0 kubenswrapper[7480]: I0308 22:12:48.844163 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" event={"ID":"d0641333-feda-44c5-baf5-ceee4ce3fd8f","Type":"ContainerStarted","Data":"5abd06ba0394acf60c173784ce356bd55de0949b044321cf96ab684d6d56e529"} Mar 08 22:12:48.844508 master-0 kubenswrapper[7480]: I0308 22:12:48.844448 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" Mar 08 22:12:48.845953 master-0 kubenswrapper[7480]: I0308 22:12:48.845906 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2" event={"ID":"b849f992-1020-4633-98be-75705b962fa9","Type":"ContainerStarted","Data":"7c4256342f8aa60d3135288746ca7cb2610fe20800104f7ef53e7de2bba69b10"} Mar 08 22:12:48.847888 master-0 kubenswrapper[7480]: I0308 22:12:48.847859 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-c4lpf_2851c096-f5cb-4a46-a5a0-ac0b1341033b/cluster-node-tuning-operator/0.log" Mar 08 22:12:48.848005 master-0 kubenswrapper[7480]: I0308 22:12:48.847975 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" event={"ID":"2851c096-f5cb-4a46-a5a0-ac0b1341033b","Type":"ContainerStarted","Data":"12e54b9f7ad60e17db8491becafc0de706219d683bbc5ce439f564e679c5111e"} Mar 08 22:12:48.850062 master-0 kubenswrapper[7480]: I0308 22:12:48.850003 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2" event={"ID":"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9","Type":"ContainerStarted","Data":"481a6108588ed0bc22920e61a3ef36e394b22655f3f89fa887b0a577e1e9072c"} Mar 08 22:12:48.852276 master-0 kubenswrapper[7480]: I0308 22:12:48.852229 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m" event={"ID":"b6bc6f78-2c5c-4add-925f-f6568a49c2cc","Type":"ContainerStarted","Data":"7603d2fd881e136012bf1afe42b31760a7ed92da49a974810eb9109c6a3ab95a"} Mar 08 22:12:48.854013 master-0 kubenswrapper[7480]: I0308 22:12:48.853977 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-nl9qg" 
event={"ID":"37bf82cb-adea-46d3-a899-136eb1d1f292","Type":"ContainerStarted","Data":"654c0aeae113f0702dd86ff44c39f979b6a8a5065ae564574d931f95b93f01c2"} Mar 08 22:12:48.856608 master-0 kubenswrapper[7480]: I0308 22:12:48.856559 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" event={"ID":"de89c423-0f2a-440f-9fa9-92fefea84b09","Type":"ContainerStarted","Data":"90b58b468745baab88972adca763ee9422b634b7fff248cdd5da328fd7ce916d"} Mar 08 22:12:48.858151 master-0 kubenswrapper[7480]: I0308 22:12:48.858114 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" event={"ID":"d4d01185-e485-4697-92c2-31a044f25d82","Type":"ContainerStarted","Data":"782960243c6236dea1d6c183e9bbe6b8287c5031207274b6535b2bb6c1a52e4d"} Mar 08 22:12:48.859960 master-0 kubenswrapper[7480]: I0308 22:12:48.859931 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw" event={"ID":"971ffa86-4d52-4dc3-ba28-03d116ec3494","Type":"ContainerStarted","Data":"552f289d3f2573263f7433542ba0f3e3e1e112be831b69c090b0709f1ab05697"} Mar 08 22:12:48.861512 master-0 kubenswrapper[7480]: I0308 22:12:48.861480 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" event={"ID":"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657","Type":"ContainerStarted","Data":"2982b8e7f0b4c02167f15f7a02deda31e69764d7a2b76b9065023bb494fe82f3"} Mar 08 22:12:48.864953 master-0 kubenswrapper[7480]: I0308 22:12:48.864925 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-b8zkz" event={"ID":"e8ef68b9-6f8d-4697-b269-91ee4e310752","Type":"ContainerStarted","Data":"55dfb1273df17a71c2face3f2f9b2be8a5c23f1ce2993ebf2043ceaa5c122430"} Mar 08 22:12:48.866877 master-0 kubenswrapper[7480]: I0308 22:12:48.866844 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg" event={"ID":"f6fbc12f-3c27-4a7a-933f-43a55c960335","Type":"ContainerStarted","Data":"5edd2120046a6dae48461fa9d5e7e465dc05c369838a5b6f5ef7b51b87e3796a"} Mar 08 22:12:48.868678 master-0 kubenswrapper[7480]: I0308 22:12:48.868646 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz" event={"ID":"2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8","Type":"ContainerStarted","Data":"0c6b4b7c21dd8a4b138e3030b88605eb5d06a2cb377b0b36526cac511abff49c"} Mar 08 22:12:48.870221 master-0 kubenswrapper[7480]: I0308 22:12:48.870192 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5" event={"ID":"0d851f97-b21e-432e-a4c3-dc0a8ff00e84","Type":"ContainerStarted","Data":"b7995f2ddd717f62af994a3ce59a3ae7eb1ed5874ee99ffa525ec7853fd36239"} Mar 08 22:12:48.872366 master-0 kubenswrapper[7480]: I0308 22:12:48.872326 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-64gfj_1ef14467-bb62-462d-9dec-dee43e4cc9bd/machine-api-operator/0.log" Mar 08 22:12:48.873122 master-0 kubenswrapper[7480]: I0308 22:12:48.873039 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj" 
event={"ID":"1ef14467-bb62-462d-9dec-dee43e4cc9bd","Type":"ContainerStarted","Data":"b536c467412a6f6e6bc5ac41305e5f93a486d6612aa6809a3738ce81cc84c7e4"} Mar 08 22:12:48.876346 master-0 kubenswrapper[7480]: I0308 22:12:48.876310 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" event={"ID":"04fb7bdb-fb5a-4187-94a3-67c8f09684ed","Type":"ContainerStarted","Data":"22d88096d73da9ad2e8592e7ffa3873cc4df75c1bfa38aab96c0c93456cc6b9f"} Mar 08 22:12:49.441571 master-0 kubenswrapper[7480]: I0308 22:12:49.441517 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:12:49.448323 master-0 kubenswrapper[7480]: I0308 22:12:49.448273 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:12:49.784346 master-0 kubenswrapper[7480]: I0308 22:12:49.784223 7480 scope.go:117] "RemoveContainer" containerID="2d5837857b12c31514737c752f1c881539906b79a846525445cc0f9995a692a4" Mar 08 22:12:49.784557 master-0 kubenswrapper[7480]: E0308 22:12:49.784463 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-wklhr_openshift-cluster-storage-operator(c901b468-b8e9-48f8-8050-0d54e24e2adb)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" podUID="c901b468-b8e9-48f8-8050-0d54e24e2adb" Mar 08 22:12:51.135027 master-0 kubenswrapper[7480]: I0308 22:12:51.134929 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 08 22:12:51.136305 master-0 kubenswrapper[7480]: I0308 22:12:51.136228 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 08 22:12:51.170122 master-0 kubenswrapper[7480]: I0308 22:12:51.165957 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 08 22:12:51.254311 master-0 kubenswrapper[7480]: I0308 22:12:51.254208 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/345ca27a-f572-4efa-b0ce-dfa8243becd6-kube-api-access\") pod \"installer-4-master-0\" (UID: \"345ca27a-f572-4efa-b0ce-dfa8243becd6\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 08 22:12:51.254687 master-0 kubenswrapper[7480]: I0308 22:12:51.254430 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/345ca27a-f572-4efa-b0ce-dfa8243becd6-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"345ca27a-f572-4efa-b0ce-dfa8243becd6\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 08 22:12:51.254687 master-0 kubenswrapper[7480]: I0308 22:12:51.254501 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/345ca27a-f572-4efa-b0ce-dfa8243becd6-var-lock\") pod \"installer-4-master-0\" (UID: \"345ca27a-f572-4efa-b0ce-dfa8243becd6\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 08 22:12:51.356340 master-0 kubenswrapper[7480]: I0308 22:12:51.356245 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/345ca27a-f572-4efa-b0ce-dfa8243becd6-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"345ca27a-f572-4efa-b0ce-dfa8243becd6\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 08 22:12:51.356732 master-0 kubenswrapper[7480]: I0308 22:12:51.356440 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/345ca27a-f572-4efa-b0ce-dfa8243becd6-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"345ca27a-f572-4efa-b0ce-dfa8243becd6\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 08 22:12:51.356732 master-0 kubenswrapper[7480]: I0308 22:12:51.356512 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/345ca27a-f572-4efa-b0ce-dfa8243becd6-var-lock\") pod \"installer-4-master-0\" (UID: \"345ca27a-f572-4efa-b0ce-dfa8243becd6\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 08 22:12:51.356732 master-0 kubenswrapper[7480]: I0308 22:12:51.356457 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/345ca27a-f572-4efa-b0ce-dfa8243becd6-var-lock\") pod \"installer-4-master-0\" (UID: \"345ca27a-f572-4efa-b0ce-dfa8243becd6\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 08 22:12:51.356972 master-0 kubenswrapper[7480]: I0308 22:12:51.356894 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/345ca27a-f572-4efa-b0ce-dfa8243becd6-kube-api-access\") pod \"installer-4-master-0\" (UID: \"345ca27a-f572-4efa-b0ce-dfa8243becd6\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 08 22:12:51.376534 master-0 kubenswrapper[7480]: I0308 22:12:51.376470 7480 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/345ca27a-f572-4efa-b0ce-dfa8243becd6-kube-api-access\") pod \"installer-4-master-0\" (UID: \"345ca27a-f572-4efa-b0ce-dfa8243becd6\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 08 22:12:51.491021 master-0 kubenswrapper[7480]: I0308 22:12:51.490840 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 08 22:12:51.999148 master-0 kubenswrapper[7480]: I0308 22:12:51.999019 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 08 22:12:52.006190 master-0 kubenswrapper[7480]: W0308 22:12:52.006114 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod345ca27a_f572_4efa_b0ce_dfa8243becd6.slice/crio-5e269b66a082f29b4e767340ca080090685cc35deec8a2ff5b8dffcb5ef07002 WatchSource:0}: Error finding container 5e269b66a082f29b4e767340ca080090685cc35deec8a2ff5b8dffcb5ef07002: Status 404 returned error can't find the container with id 5e269b66a082f29b4e767340ca080090685cc35deec8a2ff5b8dffcb5ef07002 Mar 08 22:12:52.913351 master-0 kubenswrapper[7480]: I0308 22:12:52.913121 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"345ca27a-f572-4efa-b0ce-dfa8243becd6","Type":"ContainerStarted","Data":"e63666c422a16c752beb8b0b06fe877b0b08af534810c31f0c885141cf9254a6"} Mar 08 22:12:52.913351 master-0 kubenswrapper[7480]: I0308 22:12:52.913217 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"345ca27a-f572-4efa-b0ce-dfa8243becd6","Type":"ContainerStarted","Data":"5e269b66a082f29b4e767340ca080090685cc35deec8a2ff5b8dffcb5ef07002"} Mar 08 22:12:52.940438 master-0 kubenswrapper[7480]: I0308 22:12:52.940316 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-master-0" podStartSLOduration=1.9402804169999999 podStartE2EDuration="1.940280417s" podCreationTimestamp="2026-03-08 22:12:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:12:52.938229105 +0000 UTC m=+923.391849767" watchObservedRunningTime="2026-03-08 22:12:52.940280417 +0000 UTC m=+923.393901059" Mar 08 22:12:53.016916 master-0 kubenswrapper[7480]: I0308 22:12:53.016822 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" Mar 08 22:12:53.968640 master-0 kubenswrapper[7480]: I0308 22:12:53.968506 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" Mar 08 22:13:01.782367 master-0 kubenswrapper[7480]: I0308 22:13:01.782236 7480 scope.go:117] "RemoveContainer" containerID="31335e26ca3ad44282a26b6dd2ea0c331b70b964fab198517349f95631c93a18" Mar 08 22:13:01.783049 master-0 kubenswrapper[7480]: E0308 22:13:01.782488 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-cjdgr_openshift-ingress-operator(84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed)\"" 
pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" podUID="84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed" Mar 08 22:13:02.851985 master-0 kubenswrapper[7480]: I0308 22:13:02.851896 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 22:13:03.783360 master-0 kubenswrapper[7480]: I0308 22:13:03.783290 7480 scope.go:117] "RemoveContainer" containerID="2d5837857b12c31514737c752f1c881539906b79a846525445cc0f9995a692a4" Mar 08 22:13:03.783699 master-0 kubenswrapper[7480]: E0308 22:13:03.783650 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-wklhr_openshift-cluster-storage-operator(c901b468-b8e9-48f8-8050-0d54e24e2adb)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" podUID="c901b468-b8e9-48f8-8050-0d54e24e2adb" Mar 08 22:13:03.907905 master-0 kubenswrapper[7480]: E0308 22:13:03.907708 7480 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:12:53Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:12:53Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:12:53Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:12:53Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:ae042a5d32eb2f18d537f2068849e665b55df7d8360daedaaeea98bd2a79e769\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:d077bbabe6cb885ed229119008480493e8364e4bfddaa00b099f68c52b016e6b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1733328350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:063b8972231e65eb43f6545ba37804f68138dc54d97b91a652a1c5bc7dc76aa5\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:cf682d23b2857e455609879a0867d171a221c18e2cec995dd79570b77c5a4705\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1272201949},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:e0c034ae18daa01af8d073f8cc24ae4af87883c664304910eab1167fdfd60c0b\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:ef0c6b9e405f7a452211e063ce07ded04ccbe38b53860bfd71b5a7cd5072830a\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1229556414},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:82ad8d62d92a8cc5e2391e3b0746219bd740cc26741bc7571010d337240fa112\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:ec87cd8fce2d3b4e2b15f9abaea232c03ff5a6dd46002ea24418a2
1973abf220\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1220167895},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8\\\"],\\\"sizeBytes\\\":880378279},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7\\\"],\\\"sizeBytes\\\":862197440},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce\\\"],\\\"sizeBytes\\\":557426734},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes
\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5\\\"],\\\"sizeBytes\\\":513581866},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821\\\"],\\\"sizeBytes\\\":504658657},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9b2e765b795c30c910c331c85226e5db0d56463b6c81d79ded739cba76e2b032\\\"],\\\"sizeBytes\\\":487151732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0
cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efed4867528a19e3de56447aa00fe53a6d97b74a207e9adb57f06c62dcc8944e\\\"],\\\"sizeBytes\\\":480534195},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:243ce0f08a360370edf4960aec94fc6c5be9d4aae26cf8c5320adcd047c1b14f\\\"],\\\"sizeBytes\\\":471430788}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 22:13:05.189223 master-0 kubenswrapper[7480]: I0308 22:13:05.189159 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 08 22:13:05.190140 master-0 kubenswrapper[7480]: I0308 22:13:05.190115 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 22:13:05.195004 master-0 kubenswrapper[7480]: I0308 22:13:05.192339 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 08 22:13:05.195004 master-0 kubenswrapper[7480]: I0308 22:13:05.193004 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-jszd4" Mar 08 22:13:05.266103 master-0 kubenswrapper[7480]: I0308 22:13:05.263924 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 08 22:13:05.307626 master-0 kubenswrapper[7480]: I0308 22:13:05.307555 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1d188983-1f19-4c8e-b604-034bd6308139-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"1d188983-1f19-4c8e-b604-034bd6308139\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 22:13:05.307867 master-0 kubenswrapper[7480]: I0308 22:13:05.307641 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access\") pod \"installer-2-master-0\" (UID: \"1d188983-1f19-4c8e-b604-034bd6308139\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 22:13:05.307867 master-0 kubenswrapper[7480]: I0308 22:13:05.307706 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1d188983-1f19-4c8e-b604-034bd6308139-var-lock\") pod \"installer-2-master-0\" (UID: \"1d188983-1f19-4c8e-b604-034bd6308139\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 22:13:05.409474 master-0 kubenswrapper[7480]: I0308 22:13:05.409342 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1d188983-1f19-4c8e-b604-034bd6308139-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"1d188983-1f19-4c8e-b604-034bd6308139\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 22:13:05.409474 master-0 kubenswrapper[7480]: I0308 22:13:05.409435 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access\") pod \"installer-2-master-0\" (UID: \"1d188983-1f19-4c8e-b604-034bd6308139\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 22:13:05.409729 master-0 kubenswrapper[7480]: I0308 22:13:05.409508 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1d188983-1f19-4c8e-b604-034bd6308139-var-lock\") pod \"installer-2-master-0\" (UID: \"1d188983-1f19-4c8e-b604-034bd6308139\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 22:13:05.409729 master-0 kubenswrapper[7480]: I0308 22:13:05.409520 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1d188983-1f19-4c8e-b604-034bd6308139-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"1d188983-1f19-4c8e-b604-034bd6308139\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 22:13:05.409729 master-0 kubenswrapper[7480]: I0308 22:13:05.409609 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1d188983-1f19-4c8e-b604-034bd6308139-var-lock\") pod \"installer-2-master-0\" (UID: \"1d188983-1f19-4c8e-b604-034bd6308139\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 22:13:05.426136 master-0 kubenswrapper[7480]: I0308 22:13:05.425995 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access\") pod \"installer-2-master-0\" (UID: \"1d188983-1f19-4c8e-b604-034bd6308139\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 22:13:05.512364 master-0 kubenswrapper[7480]: I0308 22:13:05.512207 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 22:13:05.550896 master-0 kubenswrapper[7480]: I0308 22:13:05.550843 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-7vnwn"] Mar 08 22:13:05.552410 master-0 kubenswrapper[7480]: I0308 22:13:05.552382 7480 util.go:30] "No sandbox for pod can be found. 
Mar 08 22:13:05.554923 master-0 kubenswrapper[7480]: I0308 22:13:05.554732 7480 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist"
Mar 08 22:13:05.554923 master-0 kubenswrapper[7480]: I0308 22:13:05.554739 7480 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-gnrft"
Mar 08 22:13:05.614405 master-0 kubenswrapper[7480]: I0308 22:13:05.613984 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4d17963f-5dc7-463e-8a72-6025e70a2144-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-7vnwn\" (UID: \"4d17963f-5dc7-463e-8a72-6025e70a2144\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7vnwn"
Mar 08 22:13:05.614405 master-0 kubenswrapper[7480]: I0308 22:13:05.614068 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgfld\" (UniqueName: \"kubernetes.io/projected/4d17963f-5dc7-463e-8a72-6025e70a2144-kube-api-access-bgfld\") pod \"cni-sysctl-allowlist-ds-7vnwn\" (UID: \"4d17963f-5dc7-463e-8a72-6025e70a2144\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7vnwn"
Mar 08 22:13:05.614405 master-0 kubenswrapper[7480]: I0308 22:13:05.614258 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/4d17963f-5dc7-463e-8a72-6025e70a2144-ready\") pod \"cni-sysctl-allowlist-ds-7vnwn\" (UID: \"4d17963f-5dc7-463e-8a72-6025e70a2144\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7vnwn"
Mar 08 22:13:05.614405 master-0 kubenswrapper[7480]: I0308 22:13:05.614351 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4d17963f-5dc7-463e-8a72-6025e70a2144-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-7vnwn\" (UID: \"4d17963f-5dc7-463e-8a72-6025e70a2144\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7vnwn"
Mar 08 22:13:05.716512 master-0 kubenswrapper[7480]: I0308 22:13:05.716438 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4d17963f-5dc7-463e-8a72-6025e70a2144-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-7vnwn\" (UID: \"4d17963f-5dc7-463e-8a72-6025e70a2144\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7vnwn"
Mar 08 22:13:05.716512 master-0 kubenswrapper[7480]: I0308 22:13:05.716524 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgfld\" (UniqueName: \"kubernetes.io/projected/4d17963f-5dc7-463e-8a72-6025e70a2144-kube-api-access-bgfld\") pod \"cni-sysctl-allowlist-ds-7vnwn\" (UID: \"4d17963f-5dc7-463e-8a72-6025e70a2144\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7vnwn"
Mar 08 22:13:05.716831 master-0 kubenswrapper[7480]: I0308 22:13:05.716685 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/4d17963f-5dc7-463e-8a72-6025e70a2144-ready\") pod \"cni-sysctl-allowlist-ds-7vnwn\" (UID: \"4d17963f-5dc7-463e-8a72-6025e70a2144\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7vnwn"
Mar 08 22:13:05.716831 master-0 kubenswrapper[7480]: I0308 22:13:05.716721 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4d17963f-5dc7-463e-8a72-6025e70a2144-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-7vnwn\" (UID: \"4d17963f-5dc7-463e-8a72-6025e70a2144\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7vnwn"
Mar 08 22:13:05.716831 master-0 kubenswrapper[7480]: I0308 22:13:05.716737 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4d17963f-5dc7-463e-8a72-6025e70a2144-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-7vnwn\" (UID: \"4d17963f-5dc7-463e-8a72-6025e70a2144\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7vnwn"
Mar 08 22:13:05.717503 master-0 kubenswrapper[7480]: I0308 22:13:05.717475 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4d17963f-5dc7-463e-8a72-6025e70a2144-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-7vnwn\" (UID: \"4d17963f-5dc7-463e-8a72-6025e70a2144\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7vnwn"
Mar 08 22:13:05.717668 master-0 kubenswrapper[7480]: I0308 22:13:05.717630 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/4d17963f-5dc7-463e-8a72-6025e70a2144-ready\") pod \"cni-sysctl-allowlist-ds-7vnwn\" (UID: \"4d17963f-5dc7-463e-8a72-6025e70a2144\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7vnwn"
Mar 08 22:13:05.733778 master-0 kubenswrapper[7480]: I0308 22:13:05.733704 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgfld\" (UniqueName: \"kubernetes.io/projected/4d17963f-5dc7-463e-8a72-6025e70a2144-kube-api-access-bgfld\") pod \"cni-sysctl-allowlist-ds-7vnwn\" (UID: \"4d17963f-5dc7-463e-8a72-6025e70a2144\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7vnwn"
Mar 08 22:13:05.879766 master-0 kubenswrapper[7480]: I0308 22:13:05.879721 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-7vnwn"
Mar 08 22:13:05.939721 master-0 kubenswrapper[7480]: I0308 22:13:05.939653 7480 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 08 22:13:05.953850 master-0 kubenswrapper[7480]: W0308 22:13:05.953767 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1d188983_1f19_4c8e_b604_034bd6308139.slice/crio-f31d8d53c8b0a414548414159bd2f7308b0afe83a8791eaea5070e54129415ad WatchSource:0}: Error finding container f31d8d53c8b0a414548414159bd2f7308b0afe83a8791eaea5070e54129415ad: Status 404 returned error can't find the container with id f31d8d53c8b0a414548414159bd2f7308b0afe83a8791eaea5070e54129415ad
Mar 08 22:13:06.044563 master-0 kubenswrapper[7480]: I0308 22:13:06.044497 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"1d188983-1f19-4c8e-b604-034bd6308139","Type":"ContainerStarted","Data":"f31d8d53c8b0a414548414159bd2f7308b0afe83a8791eaea5070e54129415ad"}
Mar 08 22:13:06.055544 master-0 kubenswrapper[7480]: I0308 22:13:06.055455 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-7vnwn" event={"ID":"4d17963f-5dc7-463e-8a72-6025e70a2144","Type":"ContainerStarted","Data":"50d6b53d454870d697b9c573115c109e90d3f7b9c2856d48b483ff4f7d0df63f"}
Mar 08 22:13:07.064231 master-0 kubenswrapper[7480]: I0308 22:13:07.064151 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"1d188983-1f19-4c8e-b604-034bd6308139","Type":"ContainerStarted","Data":"457fd83835c6efbf11a60689076f6b36dc5b753b2b41e47858b503eb7cab62fc"}
Mar 08 22:13:07.065860 master-0 kubenswrapper[7480]: I0308 22:13:07.065819 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-7vnwn" event={"ID":"4d17963f-5dc7-463e-8a72-6025e70a2144","Type":"ContainerStarted","Data":"f69e8d74039025bc6d1d82ba3b1c35b7b074f10a982a5206f620aa03cd7f1575"}
Mar 08 22:13:07.066112 master-0 kubenswrapper[7480]: I0308 22:13:07.066087 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-7vnwn"
Mar 08 22:13:07.089346 master-0 kubenswrapper[7480]: I0308 22:13:07.089220 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-0" podStartSLOduration=2.089193396 podStartE2EDuration="2.089193396s" podCreationTimestamp="2026-03-08 22:13:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:13:07.08481808 +0000 UTC m=+937.538438692" watchObservedRunningTime="2026-03-08 22:13:07.089193396 +0000 UTC m=+937.542814018"
Mar 08 22:13:07.093921 master-0 kubenswrapper[7480]: I0308 22:13:07.093863 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-7vnwn"
Mar 08 22:13:07.115425 master-0 kubenswrapper[7480]: I0308 22:13:07.115315 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-7vnwn" podStartSLOduration=2.115286997 podStartE2EDuration="2.115286997s" podCreationTimestamp="2026-03-08 22:13:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:13:07.112640601 +0000 UTC m=+937.566261253" watchObservedRunningTime="2026-03-08 22:13:07.115286997 +0000 UTC m=+937.568907599"
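[Editor's note] The two pod_startup_latency_tracker entries above report podStartSLOduration in seconds. Below is a small sketch, assuming only the message shape shown in these lines, that pulls those durations out of a journal stream and flags slow pod starts; the 30s threshold is an arbitrary example.

// slo_durations.go — report pods whose observed startup duration exceeds a threshold.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strconv"
)

var slo = regexp.MustCompile(`"Observed pod startup duration" pod="([^"]+)" podStartSLOduration=([0-9.]+)`)

func main() {
	const thresholdSeconds = 30.0 // example value, not a kubelet default
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 4*1024*1024)
	for sc.Scan() {
		m := slo.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		secs, err := strconv.ParseFloat(m[2], 64)
		if err != nil {
			continue
		}
		if secs > thresholdSeconds {
			fmt.Printf("%s took %.1fs to start\n", m[1], secs)
		}
	}
}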
Mar 08 22:13:10.661659 master-0 kubenswrapper[7480]: I0308 22:13:10.661615 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-7vnwn"]
Mar 08 22:13:10.662427 master-0 kubenswrapper[7480]: I0308 22:13:10.662399 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-7vnwn" podUID="4d17963f-5dc7-463e-8a72-6025e70a2144" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://f69e8d74039025bc6d1d82ba3b1c35b7b074f10a982a5206f620aa03cd7f1575" gracePeriod=30
Mar 08 22:13:13.814211 master-0 kubenswrapper[7480]: I0308 22:13:13.811049 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"]
Mar 08 22:13:15.703427 master-0 kubenswrapper[7480]: I0308 22:13:15.703356 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-retry-1-master-0_17d5d8c1-55a9-484d-aca8-6563dfcd4e30/installer/0.log"
Mar 08 22:13:15.704001 master-0 kubenswrapper[7480]: I0308 22:13:15.703472 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-retry-1-master-0"
Mar 08 22:13:15.744777 master-0 kubenswrapper[7480]: I0308 22:13:15.744679 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=2.744645188 podStartE2EDuration="2.744645188s" podCreationTimestamp="2026-03-08 22:13:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:13:15.741512787 +0000 UTC m=+946.195133399" watchObservedRunningTime="2026-03-08 22:13:15.744645188 +0000 UTC m=+946.198265800"
Mar 08 22:13:15.785744 master-0 kubenswrapper[7480]: I0308 22:13:15.785041 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/17d5d8c1-55a9-484d-aca8-6563dfcd4e30-var-lock\") pod \"17d5d8c1-55a9-484d-aca8-6563dfcd4e30\" (UID: \"17d5d8c1-55a9-484d-aca8-6563dfcd4e30\") "
Mar 08 22:13:15.785744 master-0 kubenswrapper[7480]: I0308 22:13:15.785195 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17d5d8c1-55a9-484d-aca8-6563dfcd4e30-kubelet-dir\") pod \"17d5d8c1-55a9-484d-aca8-6563dfcd4e30\" (UID: \"17d5d8c1-55a9-484d-aca8-6563dfcd4e30\") "
Mar 08 22:13:15.785744 master-0 kubenswrapper[7480]: I0308 22:13:15.785180 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17d5d8c1-55a9-484d-aca8-6563dfcd4e30-var-lock" (OuterVolumeSpecName: "var-lock") pod "17d5d8c1-55a9-484d-aca8-6563dfcd4e30" (UID: "17d5d8c1-55a9-484d-aca8-6563dfcd4e30"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 22:13:15.785744 master-0 kubenswrapper[7480]: I0308 22:13:15.785317 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17d5d8c1-55a9-484d-aca8-6563dfcd4e30-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "17d5d8c1-55a9-484d-aca8-6563dfcd4e30" (UID: "17d5d8c1-55a9-484d-aca8-6563dfcd4e30"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 22:13:15.785744 master-0 kubenswrapper[7480]: I0308 22:13:15.785372 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17d5d8c1-55a9-484d-aca8-6563dfcd4e30-kube-api-access\") pod \"17d5d8c1-55a9-484d-aca8-6563dfcd4e30\" (UID: \"17d5d8c1-55a9-484d-aca8-6563dfcd4e30\") "
Mar 08 22:13:15.792354 master-0 kubenswrapper[7480]: I0308 22:13:15.789728 7480 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/17d5d8c1-55a9-484d-aca8-6563dfcd4e30-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 08 22:13:15.792354 master-0 kubenswrapper[7480]: I0308 22:13:15.789983 7480 scope.go:117] "RemoveContainer" containerID="31335e26ca3ad44282a26b6dd2ea0c331b70b964fab198517349f95631c93a18"
Mar 08 22:13:15.792354 master-0 kubenswrapper[7480]: I0308 22:13:15.790205 7480 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17d5d8c1-55a9-484d-aca8-6563dfcd4e30-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 22:13:15.793296 master-0 kubenswrapper[7480]: I0308 22:13:15.793265 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17d5d8c1-55a9-484d-aca8-6563dfcd4e30-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "17d5d8c1-55a9-484d-aca8-6563dfcd4e30" (UID: "17d5d8c1-55a9-484d-aca8-6563dfcd4e30"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 22:13:15.886168 master-0 kubenswrapper[7480]: E0308 22:13:15.886042 7480 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f69e8d74039025bc6d1d82ba3b1c35b7b074f10a982a5206f620aa03cd7f1575" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 08 22:13:15.893008 master-0 kubenswrapper[7480]: E0308 22:13:15.892928 7480 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f69e8d74039025bc6d1d82ba3b1c35b7b074f10a982a5206f620aa03cd7f1575" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 08 22:13:15.895283 master-0 kubenswrapper[7480]: I0308 22:13:15.895223 7480 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17d5d8c1-55a9-484d-aca8-6563dfcd4e30-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 08 22:13:15.897219 master-0 kubenswrapper[7480]: E0308 22:13:15.897151 7480 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f69e8d74039025bc6d1d82ba3b1c35b7b074f10a982a5206f620aa03cd7f1575" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 08 22:13:15.897286 master-0 kubenswrapper[7480]: E0308 22:13:15.897231 7480 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-7vnwn" podUID="4d17963f-5dc7-463e-8a72-6025e70a2144" containerName="kube-multus-additional-cni-plugins"
Mar 08 22:13:16.174589 master-0 kubenswrapper[7480]: I0308 22:13:16.174517 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-cjdgr_84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/ingress-operator/4.log"
Mar 08 22:13:16.175311 master-0 kubenswrapper[7480]: I0308 22:13:16.175228 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" event={"ID":"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed","Type":"ContainerStarted","Data":"4737cebe7d8ef9fb43685e29dfbcfcf0ed12bbe9a9a485e2c6139850112daf4d"}
Mar 08 22:13:16.177310 master-0 kubenswrapper[7480]: I0308 22:13:16.177269 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-retry-1-master-0_17d5d8c1-55a9-484d-aca8-6563dfcd4e30/installer/0.log"
Mar 08 22:13:16.177448 master-0 kubenswrapper[7480]: I0308 22:13:16.177318 7480 generic.go:334] "Generic (PLEG): container finished" podID="17d5d8c1-55a9-484d-aca8-6563dfcd4e30" containerID="3d962a2a1642dc189ec4ca9f70645ba6e206abcdbc7f53036013ea522b35f91e" exitCode=1
Mar 08 22:13:16.177448 master-0 kubenswrapper[7480]: I0308 22:13:16.177353 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" event={"ID":"17d5d8c1-55a9-484d-aca8-6563dfcd4e30","Type":"ContainerDied","Data":"3d962a2a1642dc189ec4ca9f70645ba6e206abcdbc7f53036013ea522b35f91e"}
Mar 08 22:13:16.177448 master-0 kubenswrapper[7480]: I0308 22:13:16.177386 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" event={"ID":"17d5d8c1-55a9-484d-aca8-6563dfcd4e30","Type":"ContainerDied","Data":"63db28b3ba13f9e9a35ba9cca7260ce6eca529ffba25a33a192bc8a1c9c0d6e8"}
Mar 08 22:13:16.177448 master-0 kubenswrapper[7480]: I0308 22:13:16.177404 7480 scope.go:117] "RemoveContainer" containerID="3d962a2a1642dc189ec4ca9f70645ba6e206abcdbc7f53036013ea522b35f91e"
Mar 08 22:13:16.177724 master-0 kubenswrapper[7480]: I0308 22:13:16.177506 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-retry-1-master-0"
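[Editor's note] The "Generic (PLEG): container finished" entries above carry each container's exit code (exitCode=1 here for the failed scheduler installer). A sketch that tallies finished containers per exit code follows, assuming the podID/containerID/exitCode field layout shown in these lines, so a run of nonzero exits stands out at a glance.

// exit_codes.go — summarize PLEG "container finished" entries by exit code.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var finished = regexp.MustCompile(`container finished" podID="([^"]+)" containerID="([0-9a-f]+)" exitCode=(-?\d+)`)

func main() {
	byCode := map[string][]string{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 4*1024*1024)
	for sc.Scan() {
		if m := finished.FindStringSubmatch(sc.Text()); m != nil {
			byCode[m[3]] = append(byCode[m[3]], m[2][:12]) // keep a short container ID
		}
	}
	for code, ids := range byCode {
		fmt.Printf("exitCode=%s: %d container(s) %v\n", code, len(ids), ids)
	}
}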
Mar 08 22:13:16.208822 master-0 kubenswrapper[7480]: I0308 22:13:16.208761 7480 scope.go:117] "RemoveContainer" containerID="3d962a2a1642dc189ec4ca9f70645ba6e206abcdbc7f53036013ea522b35f91e"
Mar 08 22:13:16.209596 master-0 kubenswrapper[7480]: E0308 22:13:16.209554 7480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d962a2a1642dc189ec4ca9f70645ba6e206abcdbc7f53036013ea522b35f91e\": container with ID starting with 3d962a2a1642dc189ec4ca9f70645ba6e206abcdbc7f53036013ea522b35f91e not found: ID does not exist" containerID="3d962a2a1642dc189ec4ca9f70645ba6e206abcdbc7f53036013ea522b35f91e"
Mar 08 22:13:16.209689 master-0 kubenswrapper[7480]: I0308 22:13:16.209609 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d962a2a1642dc189ec4ca9f70645ba6e206abcdbc7f53036013ea522b35f91e"} err="failed to get container status \"3d962a2a1642dc189ec4ca9f70645ba6e206abcdbc7f53036013ea522b35f91e\": rpc error: code = NotFound desc = could not find container \"3d962a2a1642dc189ec4ca9f70645ba6e206abcdbc7f53036013ea522b35f91e\": container with ID starting with 3d962a2a1642dc189ec4ca9f70645ba6e206abcdbc7f53036013ea522b35f91e not found: ID does not exist"
Mar 08 22:13:16.243400 master-0 kubenswrapper[7480]: I0308 22:13:16.243326 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-retry-1-master-0"]
Mar 08 22:13:16.259620 master-0 kubenswrapper[7480]: I0308 22:13:16.259512 7480 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-3-retry-1-master-0"]
Mar 08 22:13:17.217315 master-0 kubenswrapper[7480]: I0308 22:13:17.217223 7480 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 08 22:13:17.218972 master-0 kubenswrapper[7480]: I0308 22:13:17.217591 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://bdac6cdbbd06980a741f7b7f070eac287d90292673acfd8284f379e07904e0d4" gracePeriod=30
Mar 08 22:13:17.218972 master-0 kubenswrapper[7480]: I0308 22:13:17.217690 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="kube-controller-manager" containerID="cri-o://7c9ea7ffc9e1743410533f8ed0b6e8efe3d3b131277609a5cc95cd8a8a1c70f9" gracePeriod=30
Mar 08 22:13:17.218972 master-0 kubenswrapper[7480]: I0308 22:13:17.217706 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="cluster-policy-controller" containerID="cri-o://f5fa12a7a68e662a35951718d219d0ea3e85eb9ea964be86f85ab1b99955dc24" gracePeriod=30
Mar 08 22:13:17.218972 master-0 kubenswrapper[7480]: I0308 22:13:17.217817 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://466b16bcbc33a17d2866a8f410cba3d2344e644048440677e9911e5f8994442c" gracePeriod=30
Mar 08 22:13:17.218972 master-0 kubenswrapper[7480]: I0308 22:13:17.218641 7480 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 08 22:13:17.218972 master-0 kubenswrapper[7480]: E0308 22:13:17.218927 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="cluster-policy-controller"
Mar 08 22:13:17.218972 master-0 kubenswrapper[7480]: I0308 22:13:17.218942 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="cluster-policy-controller"
Mar 08 22:13:17.218972 master-0 kubenswrapper[7480]: E0308 22:13:17.218961 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="cluster-policy-controller"
Mar 08 22:13:17.218972 master-0 kubenswrapper[7480]: I0308 22:13:17.218972 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="cluster-policy-controller"
Mar 08 22:13:17.218972 master-0 kubenswrapper[7480]: E0308 22:13:17.218983 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="kube-controller-manager"
Mar 08 22:13:17.218972 master-0 kubenswrapper[7480]: I0308 22:13:17.218993 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="kube-controller-manager"
Mar 08 22:13:17.220671 master-0 kubenswrapper[7480]: E0308 22:13:17.219006 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="cluster-policy-controller"
Mar 08 22:13:17.220671 master-0 kubenswrapper[7480]: I0308 22:13:17.219015 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="cluster-policy-controller"
Mar 08 22:13:17.220671 master-0 kubenswrapper[7480]: E0308 22:13:17.219028 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="cluster-policy-controller"
Mar 08 22:13:17.220671 master-0 kubenswrapper[7480]: I0308 22:13:17.219038 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="cluster-policy-controller"
Mar 08 22:13:17.220671 master-0 kubenswrapper[7480]: E0308 22:13:17.219264 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="cluster-policy-controller"
Mar 08 22:13:17.220671 master-0 kubenswrapper[7480]: I0308 22:13:17.219273 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="cluster-policy-controller"
Mar 08 22:13:17.220671 master-0 kubenswrapper[7480]: E0308 22:13:17.219320 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="kube-controller-manager-cert-syncer"
Mar 08 22:13:17.220671 master-0 kubenswrapper[7480]: I0308 22:13:17.219330 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="kube-controller-manager-cert-syncer"
Mar 08 22:13:17.220671 master-0 kubenswrapper[7480]: E0308 22:13:17.219351 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17d5d8c1-55a9-484d-aca8-6563dfcd4e30" containerName="installer"
Mar 08 22:13:17.220671 master-0 kubenswrapper[7480]: I0308 22:13:17.219361 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="17d5d8c1-55a9-484d-aca8-6563dfcd4e30" containerName="installer"
Mar 08 22:13:17.220671 master-0 kubenswrapper[7480]: E0308 22:13:17.219374 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="kube-controller-manager-recovery-controller"
Mar 08 22:13:17.220671 master-0 kubenswrapper[7480]: I0308 22:13:17.219383 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="kube-controller-manager-recovery-controller"
Mar 08 22:13:17.220671 master-0 kubenswrapper[7480]: I0308 22:13:17.219545 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="cluster-policy-controller"
Mar 08 22:13:17.220671 master-0 kubenswrapper[7480]: I0308 22:13:17.219568 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="kube-controller-manager-recovery-controller"
Mar 08 22:13:17.220671 master-0 kubenswrapper[7480]: I0308 22:13:17.219613 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="kube-controller-manager"
Mar 08 22:13:17.220671 master-0 kubenswrapper[7480]: I0308 22:13:17.219638 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="cluster-policy-controller"
Mar 08 22:13:17.220671 master-0 kubenswrapper[7480]: I0308 22:13:17.219651 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="kube-controller-manager"
Mar 08 22:13:17.220671 master-0 kubenswrapper[7480]: I0308 22:13:17.219674 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="17d5d8c1-55a9-484d-aca8-6563dfcd4e30" containerName="installer"
Mar 08 22:13:17.220671 master-0 kubenswrapper[7480]: I0308 22:13:17.219694 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="cluster-policy-controller"
Mar 08 22:13:17.220671 master-0 kubenswrapper[7480]: I0308 22:13:17.219708 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="cluster-policy-controller"
Mar 08 22:13:17.220671 master-0 kubenswrapper[7480]: I0308 22:13:17.219722 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="kube-controller-manager-cert-syncer"
Mar 08 22:13:17.220671 master-0 kubenswrapper[7480]: E0308 22:13:17.219858 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="kube-controller-manager"
Mar 08 22:13:17.220671 master-0 kubenswrapper[7480]: I0308 22:13:17.219868 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="kube-controller-manager"
Mar 08 22:13:17.220671 master-0 kubenswrapper[7480]: I0308 22:13:17.220020 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bd68ed75dc57765fa56dbf42c892ba9" containerName="cluster-policy-controller"
Mar 08 22:13:17.318102 master-0 kubenswrapper[7480]: I0308 22:13:17.318007 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7e4fb17aa6f4ce82697c1badb6e3e623-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7e4fb17aa6f4ce82697c1badb6e3e623\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 22:13:17.318465 master-0 kubenswrapper[7480]: I0308 22:13:17.318382 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7e4fb17aa6f4ce82697c1badb6e3e623-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7e4fb17aa6f4ce82697c1badb6e3e623\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 22:13:17.405026 master-0 kubenswrapper[7480]: I0308 22:13:17.404959 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5bd68ed75dc57765fa56dbf42c892ba9/cluster-policy-controller/3.log"
Mar 08 22:13:17.407561 master-0 kubenswrapper[7480]: I0308 22:13:17.407453 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5bd68ed75dc57765fa56dbf42c892ba9/kube-controller-manager-cert-syncer/0.log"
Mar 08 22:13:17.408873 master-0 kubenswrapper[7480]: I0308 22:13:17.408796 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5bd68ed75dc57765fa56dbf42c892ba9/kube-controller-manager/0.log"
Mar 08 22:13:17.409027 master-0 kubenswrapper[7480]: I0308 22:13:17.408975 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 22:13:17.420657 master-0 kubenswrapper[7480]: I0308 22:13:17.420595 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7e4fb17aa6f4ce82697c1badb6e3e623-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7e4fb17aa6f4ce82697c1badb6e3e623\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 22:13:17.420861 master-0 kubenswrapper[7480]: I0308 22:13:17.420676 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7e4fb17aa6f4ce82697c1badb6e3e623-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7e4fb17aa6f4ce82697c1badb6e3e623\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 22:13:17.420861 master-0 kubenswrapper[7480]: I0308 22:13:17.420790 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7e4fb17aa6f4ce82697c1badb6e3e623-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7e4fb17aa6f4ce82697c1badb6e3e623\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 22:13:17.420861 master-0 kubenswrapper[7480]: I0308 22:13:17.420832 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7e4fb17aa6f4ce82697c1badb6e3e623-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7e4fb17aa6f4ce82697c1badb6e3e623\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 22:13:17.426508 master-0 kubenswrapper[7480]: I0308 22:13:17.426422 7480 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="5bd68ed75dc57765fa56dbf42c892ba9" podUID="7e4fb17aa6f4ce82697c1badb6e3e623"
Mar 08 22:13:17.523378 master-0 kubenswrapper[7480]: I0308 22:13:17.522361 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5bd68ed75dc57765fa56dbf42c892ba9-cert-dir\") pod \"5bd68ed75dc57765fa56dbf42c892ba9\" (UID: \"5bd68ed75dc57765fa56dbf42c892ba9\") "
Mar 08 22:13:17.523378 master-0 kubenswrapper[7480]: I0308 22:13:17.522505 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5bd68ed75dc57765fa56dbf42c892ba9-resource-dir\") pod \"5bd68ed75dc57765fa56dbf42c892ba9\" (UID: \"5bd68ed75dc57765fa56dbf42c892ba9\") "
Mar 08 22:13:17.523378 master-0 kubenswrapper[7480]: I0308 22:13:17.522528 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bd68ed75dc57765fa56dbf42c892ba9-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "5bd68ed75dc57765fa56dbf42c892ba9" (UID: "5bd68ed75dc57765fa56dbf42c892ba9"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 22:13:17.523378 master-0 kubenswrapper[7480]: I0308 22:13:17.522741 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bd68ed75dc57765fa56dbf42c892ba9-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "5bd68ed75dc57765fa56dbf42c892ba9" (UID: "5bd68ed75dc57765fa56dbf42c892ba9"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 22:13:17.523902 master-0 kubenswrapper[7480]: I0308 22:13:17.523499 7480 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5bd68ed75dc57765fa56dbf42c892ba9-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 22:13:17.523902 master-0 kubenswrapper[7480]: I0308 22:13:17.523543 7480 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5bd68ed75dc57765fa56dbf42c892ba9-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 22:13:17.781103 master-0 kubenswrapper[7480]: I0308 22:13:17.780899 7480 scope.go:117] "RemoveContainer" containerID="2d5837857b12c31514737c752f1c881539906b79a846525445cc0f9995a692a4"
Mar 08 22:13:17.781346 master-0 kubenswrapper[7480]: E0308 22:13:17.781160 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-wklhr_openshift-cluster-storage-operator(c901b468-b8e9-48f8-8050-0d54e24e2adb)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" podUID="c901b468-b8e9-48f8-8050-0d54e24e2adb"
Mar 08 22:13:17.797249 master-0 kubenswrapper[7480]: I0308 22:13:17.797117 7480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17d5d8c1-55a9-484d-aca8-6563dfcd4e30" path="/var/lib/kubelet/pods/17d5d8c1-55a9-484d-aca8-6563dfcd4e30/volumes"
Mar 08 22:13:17.798027 master-0 kubenswrapper[7480]: I0308 22:13:17.797977 7480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bd68ed75dc57765fa56dbf42c892ba9" path="/var/lib/kubelet/pods/5bd68ed75dc57765fa56dbf42c892ba9/volumes"
Mar 08 22:13:18.214051 master-0 kubenswrapper[7480]: I0308 22:13:18.213946 7480 generic.go:334] "Generic (PLEG): container finished" podID="f0e851e2-74fc-4f4c-b907-3c9158c59cd4" containerID="5ccbb8ad117a453ccde6adce287311d7e602ee66003c156725015647e77006f5" exitCode=0
Mar 08 22:13:18.214602 master-0 kubenswrapper[7480]: I0308 22:13:18.214132 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"f0e851e2-74fc-4f4c-b907-3c9158c59cd4","Type":"ContainerDied","Data":"5ccbb8ad117a453ccde6adce287311d7e602ee66003c156725015647e77006f5"}
Mar 08 22:13:18.219975 master-0 kubenswrapper[7480]: I0308 22:13:18.219906 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5bd68ed75dc57765fa56dbf42c892ba9/cluster-policy-controller/3.log"
Mar 08 22:13:18.222648 master-0 kubenswrapper[7480]: I0308 22:13:18.222577 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5bd68ed75dc57765fa56dbf42c892ba9/kube-controller-manager-cert-syncer/0.log"
Mar 08 22:13:18.224874 master-0 kubenswrapper[7480]: I0308 22:13:18.224711 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5bd68ed75dc57765fa56dbf42c892ba9/kube-controller-manager/0.log"
Mar 08 22:13:18.224874 master-0 kubenswrapper[7480]: I0308 22:13:18.224849 7480 generic.go:334] "Generic (PLEG): container finished" podID="5bd68ed75dc57765fa56dbf42c892ba9" containerID="f5fa12a7a68e662a35951718d219d0ea3e85eb9ea964be86f85ab1b99955dc24" exitCode=0
Mar 08 22:13:18.225112 master-0 kubenswrapper[7480]: I0308 22:13:18.224884 7480 generic.go:334] "Generic (PLEG): container finished" podID="5bd68ed75dc57765fa56dbf42c892ba9" containerID="7c9ea7ffc9e1743410533f8ed0b6e8efe3d3b131277609a5cc95cd8a8a1c70f9" exitCode=0
Mar 08 22:13:18.225112 master-0 kubenswrapper[7480]: I0308 22:13:18.224903 7480 generic.go:334] "Generic (PLEG): container finished" podID="5bd68ed75dc57765fa56dbf42c892ba9" containerID="466b16bcbc33a17d2866a8f410cba3d2344e644048440677e9911e5f8994442c" exitCode=0
Mar 08 22:13:18.225112 master-0 kubenswrapper[7480]: I0308 22:13:18.224919 7480 generic.go:334] "Generic (PLEG): container finished" podID="5bd68ed75dc57765fa56dbf42c892ba9" containerID="bdac6cdbbd06980a741f7b7f070eac287d90292673acfd8284f379e07904e0d4" exitCode=2
Mar 08 22:13:18.225112 master-0 kubenswrapper[7480]: I0308 22:13:18.224990 7480 scope.go:117] "RemoveContainer" containerID="f5fa12a7a68e662a35951718d219d0ea3e85eb9ea964be86f85ab1b99955dc24"
Mar 08 22:13:18.225112 master-0 kubenswrapper[7480]: I0308 22:13:18.225026 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
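[Editor's note] Each "Killing container with a grace period" entry above is followed, within about a second here, by a matching "container finished" entry for the same container. A sketch that pairs the two by container ID and reports stop latency follows; the timestamp layout and message shapes are assumptions from this journal's format (the prefix carries no year, so only same-day deltas are meaningful).

// stop_latency.go — measure kill-to-finished latency per container from journal text.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

const stamp = "Jan 02 15:04:05.000000" // layout matching the journal prefix above

var (
	killing  = regexp.MustCompile(`^([A-Z][a-z]{2} \d{2} \d{2}:\d{2}:\d{2}\.\d{6}).*"Killing container with a grace period".*containerID="cri-o://([0-9a-f]+)"`)
	finished = regexp.MustCompile(`^([A-Z][a-z]{2} \d{2} \d{2}:\d{2}:\d{2}\.\d{6}).*container finished".*containerID="([0-9a-f]+)"`)
)

func main() {
	killedAt := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 4*1024*1024)
	for sc.Scan() {
		line := sc.Text()
		if m := killing.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse(stamp, m[1]); err == nil {
				killedAt[m[2]] = t
			}
		} else if m := finished.FindStringSubmatch(line); m != nil {
			if t0, ok := killedAt[m[2]]; ok {
				if t, err := time.Parse(stamp, m[1]); err == nil {
					fmt.Printf("container %s stopped after %v\n", m[2][:12], t.Sub(t0))
				}
			}
		}
	}
}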
Mar 08 22:13:18.247191 master-0 kubenswrapper[7480]: I0308 22:13:18.247119 7480 scope.go:117] "RemoveContainer" containerID="fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3"
Mar 08 22:13:18.267298 master-0 kubenswrapper[7480]: I0308 22:13:18.265028 7480 scope.go:117] "RemoveContainer" containerID="7c9ea7ffc9e1743410533f8ed0b6e8efe3d3b131277609a5cc95cd8a8a1c70f9"
Mar 08 22:13:18.279889 master-0 kubenswrapper[7480]: I0308 22:13:18.279781 7480 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="5bd68ed75dc57765fa56dbf42c892ba9" podUID="7e4fb17aa6f4ce82697c1badb6e3e623"
Mar 08 22:13:18.300383 master-0 kubenswrapper[7480]: I0308 22:13:18.300323 7480 scope.go:117] "RemoveContainer" containerID="466b16bcbc33a17d2866a8f410cba3d2344e644048440677e9911e5f8994442c"
Mar 08 22:13:18.321721 master-0 kubenswrapper[7480]: I0308 22:13:18.321660 7480 scope.go:117] "RemoveContainer" containerID="bdac6cdbbd06980a741f7b7f070eac287d90292673acfd8284f379e07904e0d4"
Mar 08 22:13:18.365678 master-0 kubenswrapper[7480]: I0308 22:13:18.365607 7480 scope.go:117] "RemoveContainer" containerID="f6a438a833eacd73d743c77bfb467be55e7560ad44e884b3ffe14f62aaa3a243"
Mar 08 22:13:18.414401 master-0 kubenswrapper[7480]: I0308 22:13:18.414339 7480 scope.go:117] "RemoveContainer" containerID="f5fa12a7a68e662a35951718d219d0ea3e85eb9ea964be86f85ab1b99955dc24"
Mar 08 22:13:18.418321 master-0 kubenswrapper[7480]: E0308 22:13:18.418257 7480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5fa12a7a68e662a35951718d219d0ea3e85eb9ea964be86f85ab1b99955dc24\": container with ID starting with f5fa12a7a68e662a35951718d219d0ea3e85eb9ea964be86f85ab1b99955dc24 not found: ID does not exist" containerID="f5fa12a7a68e662a35951718d219d0ea3e85eb9ea964be86f85ab1b99955dc24"
Mar 08 22:13:18.418411 master-0 kubenswrapper[7480]: I0308 22:13:18.418328 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5fa12a7a68e662a35951718d219d0ea3e85eb9ea964be86f85ab1b99955dc24"} err="failed to get container status \"f5fa12a7a68e662a35951718d219d0ea3e85eb9ea964be86f85ab1b99955dc24\": rpc error: code = NotFound desc = could not find container \"f5fa12a7a68e662a35951718d219d0ea3e85eb9ea964be86f85ab1b99955dc24\": container with ID starting with f5fa12a7a68e662a35951718d219d0ea3e85eb9ea964be86f85ab1b99955dc24 not found: ID does not exist"
Mar 08 22:13:18.418411 master-0 kubenswrapper[7480]: I0308 22:13:18.418367 7480 scope.go:117] "RemoveContainer" containerID="fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3"
Mar 08 22:13:18.422413 master-0 kubenswrapper[7480]: E0308 22:13:18.422203 7480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3\": container with ID starting with fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3 not found: ID does not exist" containerID="fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3"
Mar 08 22:13:18.422413 master-0 kubenswrapper[7480]: I0308 22:13:18.422250 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3"} err="failed to get container status \"fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3\": rpc error: code = NotFound desc = could not find container \"fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3\": container with ID starting with fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3 not found: ID does not exist"
Mar 08 22:13:18.422413 master-0 kubenswrapper[7480]: I0308 22:13:18.422279 7480 scope.go:117] "RemoveContainer" containerID="7c9ea7ffc9e1743410533f8ed0b6e8efe3d3b131277609a5cc95cd8a8a1c70f9"
Mar 08 22:13:18.426474 master-0 kubenswrapper[7480]: E0308 22:13:18.426237 7480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c9ea7ffc9e1743410533f8ed0b6e8efe3d3b131277609a5cc95cd8a8a1c70f9\": container with ID starting with 7c9ea7ffc9e1743410533f8ed0b6e8efe3d3b131277609a5cc95cd8a8a1c70f9 not found: ID does not exist" containerID="7c9ea7ffc9e1743410533f8ed0b6e8efe3d3b131277609a5cc95cd8a8a1c70f9"
Mar 08 22:13:18.426474 master-0 kubenswrapper[7480]: I0308 22:13:18.426293 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c9ea7ffc9e1743410533f8ed0b6e8efe3d3b131277609a5cc95cd8a8a1c70f9"} err="failed to get container status \"7c9ea7ffc9e1743410533f8ed0b6e8efe3d3b131277609a5cc95cd8a8a1c70f9\": rpc error: code = NotFound desc = could not find container \"7c9ea7ffc9e1743410533f8ed0b6e8efe3d3b131277609a5cc95cd8a8a1c70f9\": container with ID starting with 7c9ea7ffc9e1743410533f8ed0b6e8efe3d3b131277609a5cc95cd8a8a1c70f9 not found: ID does not exist"
Mar 08 22:13:18.426474 master-0 kubenswrapper[7480]: I0308 22:13:18.426325 7480 scope.go:117] "RemoveContainer" containerID="466b16bcbc33a17d2866a8f410cba3d2344e644048440677e9911e5f8994442c"
Mar 08 22:13:18.429643 master-0 kubenswrapper[7480]: E0308 22:13:18.429583 7480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"466b16bcbc33a17d2866a8f410cba3d2344e644048440677e9911e5f8994442c\": container with ID starting with 466b16bcbc33a17d2866a8f410cba3d2344e644048440677e9911e5f8994442c not found: ID does not exist" containerID="466b16bcbc33a17d2866a8f410cba3d2344e644048440677e9911e5f8994442c"
Mar 08 22:13:18.429643 master-0 kubenswrapper[7480]: I0308 22:13:18.429630 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"466b16bcbc33a17d2866a8f410cba3d2344e644048440677e9911e5f8994442c"} err="failed to get container status \"466b16bcbc33a17d2866a8f410cba3d2344e644048440677e9911e5f8994442c\": rpc error: code = NotFound desc = could not find container \"466b16bcbc33a17d2866a8f410cba3d2344e644048440677e9911e5f8994442c\": container with ID starting with 466b16bcbc33a17d2866a8f410cba3d2344e644048440677e9911e5f8994442c not found: ID does not exist"
Mar 08 22:13:18.429752 master-0 kubenswrapper[7480]: I0308 22:13:18.429658 7480 scope.go:117] "RemoveContainer" containerID="bdac6cdbbd06980a741f7b7f070eac287d90292673acfd8284f379e07904e0d4"
Mar 08 22:13:18.433914 master-0 kubenswrapper[7480]: E0308 22:13:18.430372 7480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdac6cdbbd06980a741f7b7f070eac287d90292673acfd8284f379e07904e0d4\": container with ID starting with bdac6cdbbd06980a741f7b7f070eac287d90292673acfd8284f379e07904e0d4 not found: ID does not exist" containerID="bdac6cdbbd06980a741f7b7f070eac287d90292673acfd8284f379e07904e0d4"
Mar 08 22:13:18.433914 master-0 kubenswrapper[7480]: I0308 22:13:18.430415 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdac6cdbbd06980a741f7b7f070eac287d90292673acfd8284f379e07904e0d4"} err="failed to get container status \"bdac6cdbbd06980a741f7b7f070eac287d90292673acfd8284f379e07904e0d4\": rpc error: code = NotFound desc = could not find container \"bdac6cdbbd06980a741f7b7f070eac287d90292673acfd8284f379e07904e0d4\": container with ID starting with bdac6cdbbd06980a741f7b7f070eac287d90292673acfd8284f379e07904e0d4 not found: ID does not exist"
Mar 08 22:13:18.433914 master-0 kubenswrapper[7480]: I0308 22:13:18.430437 7480 scope.go:117] "RemoveContainer" containerID="f6a438a833eacd73d743c77bfb467be55e7560ad44e884b3ffe14f62aaa3a243"
Mar 08 22:13:18.433914 master-0 kubenswrapper[7480]: E0308 22:13:18.430799 7480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6a438a833eacd73d743c77bfb467be55e7560ad44e884b3ffe14f62aaa3a243\": container with ID starting with f6a438a833eacd73d743c77bfb467be55e7560ad44e884b3ffe14f62aaa3a243 not found: ID does not exist" containerID="f6a438a833eacd73d743c77bfb467be55e7560ad44e884b3ffe14f62aaa3a243"
Mar 08 22:13:18.433914 master-0 kubenswrapper[7480]: I0308 22:13:18.430836 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6a438a833eacd73d743c77bfb467be55e7560ad44e884b3ffe14f62aaa3a243"} err="failed to get container status \"f6a438a833eacd73d743c77bfb467be55e7560ad44e884b3ffe14f62aaa3a243\": rpc error: code = NotFound desc = could not find container \"f6a438a833eacd73d743c77bfb467be55e7560ad44e884b3ffe14f62aaa3a243\": container with ID starting with f6a438a833eacd73d743c77bfb467be55e7560ad44e884b3ffe14f62aaa3a243 not found: ID does not exist"
Mar 08 22:13:18.433914 master-0 kubenswrapper[7480]: I0308 22:13:18.430855 7480 scope.go:117] "RemoveContainer" containerID="f5fa12a7a68e662a35951718d219d0ea3e85eb9ea964be86f85ab1b99955dc24"
Mar 08 22:13:18.433914 master-0 kubenswrapper[7480]: I0308 22:13:18.431337 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5fa12a7a68e662a35951718d219d0ea3e85eb9ea964be86f85ab1b99955dc24"} err="failed to get container status \"f5fa12a7a68e662a35951718d219d0ea3e85eb9ea964be86f85ab1b99955dc24\": rpc error: code = NotFound desc = could not find container \"f5fa12a7a68e662a35951718d219d0ea3e85eb9ea964be86f85ab1b99955dc24\": container with ID starting with f5fa12a7a68e662a35951718d219d0ea3e85eb9ea964be86f85ab1b99955dc24 not found: ID does not exist"
Mar 08 22:13:18.433914 master-0 kubenswrapper[7480]: I0308 22:13:18.431373 7480 scope.go:117] "RemoveContainer" containerID="fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3"
Mar 08 22:13:18.433914 master-0 kubenswrapper[7480]: I0308 22:13:18.431624 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3"} err="failed to get container status \"fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3\": rpc error: code = NotFound desc = could not find container \"fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3\": container with ID starting with fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3 not found: ID does not exist"
Mar 08 22:13:18.433914 master-0 kubenswrapper[7480]: I0308 22:13:18.431642 7480 scope.go:117] "RemoveContainer" containerID="7c9ea7ffc9e1743410533f8ed0b6e8efe3d3b131277609a5cc95cd8a8a1c70f9"
Mar 08 22:13:18.433914 master-0 kubenswrapper[7480]: I0308 22:13:18.432191 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c9ea7ffc9e1743410533f8ed0b6e8efe3d3b131277609a5cc95cd8a8a1c70f9"} err="failed to get container status \"7c9ea7ffc9e1743410533f8ed0b6e8efe3d3b131277609a5cc95cd8a8a1c70f9\": rpc error: code = NotFound desc = could not find container \"7c9ea7ffc9e1743410533f8ed0b6e8efe3d3b131277609a5cc95cd8a8a1c70f9\": container with ID starting with 7c9ea7ffc9e1743410533f8ed0b6e8efe3d3b131277609a5cc95cd8a8a1c70f9 not found: ID does not exist"
Mar 08 22:13:18.433914 master-0 kubenswrapper[7480]: I0308 22:13:18.432220 7480 scope.go:117] "RemoveContainer" containerID="466b16bcbc33a17d2866a8f410cba3d2344e644048440677e9911e5f8994442c"
Mar 08 22:13:18.433914 master-0 kubenswrapper[7480]: I0308 22:13:18.432516 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"466b16bcbc33a17d2866a8f410cba3d2344e644048440677e9911e5f8994442c"} err="failed to get container status \"466b16bcbc33a17d2866a8f410cba3d2344e644048440677e9911e5f8994442c\": rpc error: code = NotFound desc = could not find container \"466b16bcbc33a17d2866a8f410cba3d2344e644048440677e9911e5f8994442c\": container with ID starting with 466b16bcbc33a17d2866a8f410cba3d2344e644048440677e9911e5f8994442c not found: ID does not exist"
Mar 08 22:13:18.433914 master-0 kubenswrapper[7480]: I0308 22:13:18.432536 7480 scope.go:117] "RemoveContainer" containerID="bdac6cdbbd06980a741f7b7f070eac287d90292673acfd8284f379e07904e0d4"
Mar 08 22:13:18.433914 master-0 kubenswrapper[7480]: I0308 22:13:18.432771 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdac6cdbbd06980a741f7b7f070eac287d90292673acfd8284f379e07904e0d4"} err="failed to get container status \"bdac6cdbbd06980a741f7b7f070eac287d90292673acfd8284f379e07904e0d4\": rpc error: code = NotFound desc = could not find container \"bdac6cdbbd06980a741f7b7f070eac287d90292673acfd8284f379e07904e0d4\": container with ID starting with bdac6cdbbd06980a741f7b7f070eac287d90292673acfd8284f379e07904e0d4 not found: ID does not exist"
Mar 08 22:13:18.433914 master-0 kubenswrapper[7480]: I0308 22:13:18.432789 7480 scope.go:117] "RemoveContainer" containerID="f6a438a833eacd73d743c77bfb467be55e7560ad44e884b3ffe14f62aaa3a243"
Mar 08 22:13:18.433914 master-0 kubenswrapper[7480]: I0308 22:13:18.433002 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6a438a833eacd73d743c77bfb467be55e7560ad44e884b3ffe14f62aaa3a243"} err="failed to get container status \"f6a438a833eacd73d743c77bfb467be55e7560ad44e884b3ffe14f62aaa3a243\": rpc error: code = NotFound desc = could not find container \"f6a438a833eacd73d743c77bfb467be55e7560ad44e884b3ffe14f62aaa3a243\": container with ID starting with f6a438a833eacd73d743c77bfb467be55e7560ad44e884b3ffe14f62aaa3a243 not found: ID does not exist"
Mar 08 22:13:18.433914 master-0 kubenswrapper[7480]: I0308 22:13:18.433022 7480 scope.go:117] "RemoveContainer" containerID="f5fa12a7a68e662a35951718d219d0ea3e85eb9ea964be86f85ab1b99955dc24"
Mar 08 22:13:18.433914 master-0 kubenswrapper[7480]: I0308 22:13:18.433229 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5fa12a7a68e662a35951718d219d0ea3e85eb9ea964be86f85ab1b99955dc24"} err="failed to get container status \"f5fa12a7a68e662a35951718d219d0ea3e85eb9ea964be86f85ab1b99955dc24\": rpc error: code = NotFound desc = could not find container \"f5fa12a7a68e662a35951718d219d0ea3e85eb9ea964be86f85ab1b99955dc24\": container with ID starting with f5fa12a7a68e662a35951718d219d0ea3e85eb9ea964be86f85ab1b99955dc24 not found: ID does not exist"
Mar 08 22:13:18.433914 master-0 kubenswrapper[7480]: I0308 22:13:18.433248 7480 scope.go:117] "RemoveContainer" containerID="fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3"
Mar 08 22:13:18.433914 master-0 kubenswrapper[7480]: I0308 22:13:18.433510 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3"} err="failed to get container status \"fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3\": rpc error: code = NotFound desc = could not find container \"fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3\": container with ID starting with fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3 not found: ID does not exist"
Mar 08 22:13:18.433914 master-0 kubenswrapper[7480]: I0308 22:13:18.433540 7480 scope.go:117] "RemoveContainer" containerID="7c9ea7ffc9e1743410533f8ed0b6e8efe3d3b131277609a5cc95cd8a8a1c70f9"
Mar 08 22:13:18.433914 master-0 kubenswrapper[7480]: I0308 22:13:18.433917 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c9ea7ffc9e1743410533f8ed0b6e8efe3d3b131277609a5cc95cd8a8a1c70f9"} err="failed to get container status \"7c9ea7ffc9e1743410533f8ed0b6e8efe3d3b131277609a5cc95cd8a8a1c70f9\": rpc error: code = NotFound desc = could not find container \"7c9ea7ffc9e1743410533f8ed0b6e8efe3d3b131277609a5cc95cd8a8a1c70f9\": container with ID starting with 7c9ea7ffc9e1743410533f8ed0b6e8efe3d3b131277609a5cc95cd8a8a1c70f9 not found: ID does not exist"
Mar 08 22:13:18.434677 master-0 kubenswrapper[7480]: I0308 22:13:18.433964 7480 scope.go:117] "RemoveContainer" containerID="466b16bcbc33a17d2866a8f410cba3d2344e644048440677e9911e5f8994442c"
Mar 08 22:13:18.434677 master-0 kubenswrapper[7480]: I0308 22:13:18.434267 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"466b16bcbc33a17d2866a8f410cba3d2344e644048440677e9911e5f8994442c"} err="failed to get container status \"466b16bcbc33a17d2866a8f410cba3d2344e644048440677e9911e5f8994442c\": rpc error: code = NotFound desc = could not find container \"466b16bcbc33a17d2866a8f410cba3d2344e644048440677e9911e5f8994442c\": container with ID starting with 466b16bcbc33a17d2866a8f410cba3d2344e644048440677e9911e5f8994442c not found: ID does not exist"
Mar 08 22:13:18.434677 master-0 kubenswrapper[7480]: I0308 22:13:18.434289 7480 scope.go:117] "RemoveContainer" containerID="bdac6cdbbd06980a741f7b7f070eac287d90292673acfd8284f379e07904e0d4"
Mar 08 22:13:18.434677 master-0 kubenswrapper[7480]: I0308 22:13:18.434584 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdac6cdbbd06980a741f7b7f070eac287d90292673acfd8284f379e07904e0d4"} err="failed to get container status \"bdac6cdbbd06980a741f7b7f070eac287d90292673acfd8284f379e07904e0d4\": rpc error: code = NotFound desc = could not find container \"bdac6cdbbd06980a741f7b7f070eac287d90292673acfd8284f379e07904e0d4\": container with ID starting with bdac6cdbbd06980a741f7b7f070eac287d90292673acfd8284f379e07904e0d4 not found: ID does not exist"
Mar 08 22:13:18.434677 master-0 kubenswrapper[7480]: I0308 22:13:18.434642 7480 scope.go:117] "RemoveContainer" containerID="f6a438a833eacd73d743c77bfb467be55e7560ad44e884b3ffe14f62aaa3a243"
Mar 08 22:13:18.436155 master-0 kubenswrapper[7480]: I0308 22:13:18.434932 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6a438a833eacd73d743c77bfb467be55e7560ad44e884b3ffe14f62aaa3a243"} err="failed to get container status \"f6a438a833eacd73d743c77bfb467be55e7560ad44e884b3ffe14f62aaa3a243\": rpc error: code = NotFound desc = could not find container \"f6a438a833eacd73d743c77bfb467be55e7560ad44e884b3ffe14f62aaa3a243\": container with ID starting with f6a438a833eacd73d743c77bfb467be55e7560ad44e884b3ffe14f62aaa3a243 not found: ID does not exist"
Mar 08 22:13:18.436155 master-0 kubenswrapper[7480]: I0308 22:13:18.434964 7480 scope.go:117] "RemoveContainer" containerID="f5fa12a7a68e662a35951718d219d0ea3e85eb9ea964be86f85ab1b99955dc24"
Mar 08 22:13:18.436155 master-0 kubenswrapper[7480]: I0308 22:13:18.435211 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5fa12a7a68e662a35951718d219d0ea3e85eb9ea964be86f85ab1b99955dc24"} err="failed to get container status \"f5fa12a7a68e662a35951718d219d0ea3e85eb9ea964be86f85ab1b99955dc24\": rpc error: code = NotFound desc = could not find container \"f5fa12a7a68e662a35951718d219d0ea3e85eb9ea964be86f85ab1b99955dc24\": container with ID starting with f5fa12a7a68e662a35951718d219d0ea3e85eb9ea964be86f85ab1b99955dc24 not found: ID does not exist"
Mar 08 22:13:18.436155 master-0 kubenswrapper[7480]: I0308 22:13:18.435225 7480 scope.go:117] "RemoveContainer" containerID="fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3"
Mar 08 22:13:18.436155 master-0 kubenswrapper[7480]: I0308 22:13:18.435421 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3"} err="failed to get container status \"fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3\": rpc error: code = NotFound desc = could not find container \"fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3\": container with ID starting with fbcadd75d16f9e0084af3c56153c9af308d648eeb8bb4cf6d526641dcb4a37e3 not found: ID does not exist"
Mar 08 22:13:18.436155 master-0 kubenswrapper[7480]: I0308 22:13:18.435444 7480 scope.go:117] "RemoveContainer" containerID="7c9ea7ffc9e1743410533f8ed0b6e8efe3d3b131277609a5cc95cd8a8a1c70f9"
Mar 08 22:13:18.436155 master-0 kubenswrapper[7480]: I0308 22:13:18.435607 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c9ea7ffc9e1743410533f8ed0b6e8efe3d3b131277609a5cc95cd8a8a1c70f9"} err="failed to get container status \"7c9ea7ffc9e1743410533f8ed0b6e8efe3d3b131277609a5cc95cd8a8a1c70f9\": rpc error: code = NotFound desc = could not find container \"7c9ea7ffc9e1743410533f8ed0b6e8efe3d3b131277609a5cc95cd8a8a1c70f9\": container with ID starting with 7c9ea7ffc9e1743410533f8ed0b6e8efe3d3b131277609a5cc95cd8a8a1c70f9 not found: ID does not exist"
Mar 08 22:13:18.436155 master-0 kubenswrapper[7480]: I0308 22:13:18.435622 7480 scope.go:117] "RemoveContainer" containerID="466b16bcbc33a17d2866a8f410cba3d2344e644048440677e9911e5f8994442c"
Mar 08 22:13:18.436155 master-0 kubenswrapper[7480]: I0308 22:13:18.435806 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"466b16bcbc33a17d2866a8f410cba3d2344e644048440677e9911e5f8994442c"} err="failed to get container status \"466b16bcbc33a17d2866a8f410cba3d2344e644048440677e9911e5f8994442c\": rpc error: code = NotFound desc = could not find container \"466b16bcbc33a17d2866a8f410cba3d2344e644048440677e9911e5f8994442c\": container with ID starting with 466b16bcbc33a17d2866a8f410cba3d2344e644048440677e9911e5f8994442c not found: ID does not exist"
Mar 08 22:13:18.436155 master-0 kubenswrapper[7480]: I0308 22:13:18.435837 7480 scope.go:117] "RemoveContainer" containerID="bdac6cdbbd06980a741f7b7f070eac287d90292673acfd8284f379e07904e0d4"
Mar 08 22:13:18.436792 master-0 kubenswrapper[7480]: I0308 22:13:18.436434 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdac6cdbbd06980a741f7b7f070eac287d90292673acfd8284f379e07904e0d4"} err="failed to get container status \"bdac6cdbbd06980a741f7b7f070eac287d90292673acfd8284f379e07904e0d4\": rpc error: code = NotFound desc = could not find container \"bdac6cdbbd06980a741f7b7f070eac287d90292673acfd8284f379e07904e0d4\": container with ID starting with bdac6cdbbd06980a741f7b7f070eac287d90292673acfd8284f379e07904e0d4 not found: ID does not exist"
Mar 08 22:13:18.436792 master-0 kubenswrapper[7480]: I0308 22:13:18.436452 7480 scope.go:117] "RemoveContainer" containerID="f6a438a833eacd73d743c77bfb467be55e7560ad44e884b3ffe14f62aaa3a243"
Mar 08 22:13:18.436792 master-0 kubenswrapper[7480]: I0308 22:13:18.436711 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6a438a833eacd73d743c77bfb467be55e7560ad44e884b3ffe14f62aaa3a243"} err="failed to get container status \"f6a438a833eacd73d743c77bfb467be55e7560ad44e884b3ffe14f62aaa3a243\": rpc error: code = NotFound desc = could not find container \"f6a438a833eacd73d743c77bfb467be55e7560ad44e884b3ffe14f62aaa3a243\": container with ID starting with f6a438a833eacd73d743c77bfb467be55e7560ad44e884b3ffe14f62aaa3a243 not found: ID does not exist"
Mar 08 22:13:19.571291 master-0 kubenswrapper[7480]: I0308 22:13:19.571259 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 08 22:13:19.666278 master-0 kubenswrapper[7480]: I0308 22:13:19.666197 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f0e851e2-74fc-4f4c-b907-3c9158c59cd4-kube-api-access\") pod \"f0e851e2-74fc-4f4c-b907-3c9158c59cd4\" (UID: \"f0e851e2-74fc-4f4c-b907-3c9158c59cd4\") " Mar 08 22:13:19.666278 master-0 kubenswrapper[7480]: I0308 22:13:19.666298 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f0e851e2-74fc-4f4c-b907-3c9158c59cd4-var-lock\") pod \"f0e851e2-74fc-4f4c-b907-3c9158c59cd4\" (UID: \"f0e851e2-74fc-4f4c-b907-3c9158c59cd4\") " Mar 08 22:13:19.666683 master-0 kubenswrapper[7480]: I0308 22:13:19.666514 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f0e851e2-74fc-4f4c-b907-3c9158c59cd4-kubelet-dir\") pod \"f0e851e2-74fc-4f4c-b907-3c9158c59cd4\" (UID: \"f0e851e2-74fc-4f4c-b907-3c9158c59cd4\") " Mar 08 22:13:19.666797 master-0 kubenswrapper[7480]: I0308 22:13:19.666733 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0e851e2-74fc-4f4c-b907-3c9158c59cd4-var-lock" (OuterVolumeSpecName: "var-lock") pod "f0e851e2-74fc-4f4c-b907-3c9158c59cd4" (UID: "f0e851e2-74fc-4f4c-b907-3c9158c59cd4"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:13:19.666889 master-0 kubenswrapper[7480]: I0308 22:13:19.666872 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0e851e2-74fc-4f4c-b907-3c9158c59cd4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f0e851e2-74fc-4f4c-b907-3c9158c59cd4" (UID: "f0e851e2-74fc-4f4c-b907-3c9158c59cd4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:13:19.670409 master-0 kubenswrapper[7480]: I0308 22:13:19.670361 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0e851e2-74fc-4f4c-b907-3c9158c59cd4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f0e851e2-74fc-4f4c-b907-3c9158c59cd4" (UID: "f0e851e2-74fc-4f4c-b907-3c9158c59cd4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 22:13:19.768619 master-0 kubenswrapper[7480]: I0308 22:13:19.768449 7480 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f0e851e2-74fc-4f4c-b907-3c9158c59cd4-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 22:13:19.768619 master-0 kubenswrapper[7480]: I0308 22:13:19.768503 7480 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f0e851e2-74fc-4f4c-b907-3c9158c59cd4-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 08 22:13:19.768619 master-0 kubenswrapper[7480]: I0308 22:13:19.768524 7480 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f0e851e2-74fc-4f4c-b907-3c9158c59cd4-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 08 22:13:20.249147 master-0 kubenswrapper[7480]: I0308 22:13:20.248437 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"f0e851e2-74fc-4f4c-b907-3c9158c59cd4","Type":"ContainerDied","Data":"7806b893b20c55d1f8afd2a7c71328b4f99e83bbf86148341ea260ee8e9271b9"} Mar 08 22:13:20.249147 master-0 kubenswrapper[7480]: I0308 22:13:20.248533 7480 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7806b893b20c55d1f8afd2a7c71328b4f99e83bbf86148341ea260ee8e9271b9" Mar 08 22:13:20.249147 master-0 kubenswrapper[7480]: I0308 22:13:20.248549 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 08 22:13:25.291982 master-0 kubenswrapper[7480]: I0308 22:13:25.291794 7480 generic.go:334] "Generic (PLEG): container finished" podID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerID="b774a43655d7769bfa98aff1d64209f6f402f99c955ad8667823c36ae49e4cf7" exitCode=0 Mar 08 22:13:25.291982 master-0 kubenswrapper[7480]: I0308 22:13:25.291838 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" event={"ID":"81f5ed55-225c-41e2-bc9d-b41063a604c9","Type":"ContainerDied","Data":"b774a43655d7769bfa98aff1d64209f6f402f99c955ad8667823c36ae49e4cf7"} Mar 08 22:13:25.292760 master-0 kubenswrapper[7480]: I0308 22:13:25.292131 7480 scope.go:117] "RemoveContainer" containerID="a9ff593041cd55425d50bbaa4be87eabe25dc7300e7e43dd725623d6f81a484c" Mar 08 22:13:25.293448 master-0 kubenswrapper[7480]: I0308 22:13:25.293339 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" event={"ID":"81f5ed55-225c-41e2-bc9d-b41063a604c9","Type":"ContainerStarted","Data":"2afeed653a539a9642286d79c4ea18f7a0df39faf484b243e4c5081f2b8b2452"} Mar 08 22:13:25.500606 master-0 kubenswrapper[7480]: I0308 22:13:25.500474 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:13:25.500606 master-0 kubenswrapper[7480]: I0308 22:13:25.500567 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:13:25.505608 master-0 kubenswrapper[7480]: I0308 22:13:25.505534 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 
22:13:25.505608 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:13:25.505608 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:13:25.505608 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:13:25.505980 master-0 kubenswrapper[7480]: I0308 22:13:25.505628 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:13:25.883438 master-0 kubenswrapper[7480]: E0308 22:13:25.883350 7480 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f69e8d74039025bc6d1d82ba3b1c35b7b074f10a982a5206f620aa03cd7f1575" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 22:13:25.885813 master-0 kubenswrapper[7480]: E0308 22:13:25.885762 7480 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f69e8d74039025bc6d1d82ba3b1c35b7b074f10a982a5206f620aa03cd7f1575" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 22:13:25.887821 master-0 kubenswrapper[7480]: E0308 22:13:25.887781 7480 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f69e8d74039025bc6d1d82ba3b1c35b7b074f10a982a5206f620aa03cd7f1575" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 22:13:25.887911 master-0 kubenswrapper[7480]: E0308 22:13:25.887831 7480 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-7vnwn" podUID="4d17963f-5dc7-463e-8a72-6025e70a2144" containerName="kube-multus-additional-cni-plugins" Mar 08 22:13:26.505090 master-0 kubenswrapper[7480]: I0308 22:13:26.504959 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:13:26.505090 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:13:26.505090 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:13:26.505090 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:13:26.505785 master-0 kubenswrapper[7480]: I0308 22:13:26.505120 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:13:27.504121 master-0 kubenswrapper[7480]: I0308 22:13:27.504030 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:13:27.504121 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:13:27.504121 master-0 kubenswrapper[7480]: [+]process-running ok 
Mar 08 22:13:27.504121 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:13:27.504477 master-0 kubenswrapper[7480]: I0308 22:13:27.504190 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:13:27.780837 master-0 kubenswrapper[7480]: I0308 22:13:27.780658 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:13:27.805618 master-0 kubenswrapper[7480]: I0308 22:13:27.805533 7480 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ad1df1f4-3786-42d9-b8a6-70d91ff2819d" Mar 08 22:13:27.805618 master-0 kubenswrapper[7480]: I0308 22:13:27.805602 7480 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ad1df1f4-3786-42d9-b8a6-70d91ff2819d" Mar 08 22:13:27.904319 master-0 kubenswrapper[7480]: I0308 22:13:27.904184 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 08 22:13:27.911650 master-0 kubenswrapper[7480]: I0308 22:13:27.911575 7480 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:13:27.917130 master-0 kubenswrapper[7480]: I0308 22:13:27.917007 7480 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 08 22:13:28.036676 master-0 kubenswrapper[7480]: I0308 22:13:28.036552 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:13:28.052913 master-0 kubenswrapper[7480]: I0308 22:13:28.052820 7480 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 08 22:13:28.321060 master-0 kubenswrapper[7480]: I0308 22:13:28.320924 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7e4fb17aa6f4ce82697c1badb6e3e623","Type":"ContainerStarted","Data":"270111bd9a880fa859abff7a300a5a42546d0f86314f375208a892a811a648e7"} Mar 08 22:13:28.517909 master-0 kubenswrapper[7480]: I0308 22:13:28.517834 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:13:28.517909 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:13:28.517909 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:13:28.517909 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:13:28.518386 master-0 kubenswrapper[7480]: I0308 22:13:28.517930 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:13:29.335126 master-0 kubenswrapper[7480]: I0308 22:13:29.335010 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7e4fb17aa6f4ce82697c1badb6e3e623","Type":"ContainerStarted","Data":"15c38815310dffefa782d7e3b86b468eadf91008125f12d833ccabdf6a47990b"} Mar 08 22:13:29.335729 master-0 kubenswrapper[7480]: I0308 22:13:29.335125 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7e4fb17aa6f4ce82697c1badb6e3e623","Type":"ContainerStarted","Data":"24468252b1016ecbfc6fabcc842f03b85cc1d8d62ad0492983e2d43991a2cb4a"} Mar 08 22:13:29.335729 master-0 kubenswrapper[7480]: I0308 22:13:29.335176 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7e4fb17aa6f4ce82697c1badb6e3e623","Type":"ContainerStarted","Data":"045d96fc5260120205fd3f9cca2039678cbcc24c6c931c6bbf3f1ba418756e6c"} Mar 08 22:13:29.505168 master-0 kubenswrapper[7480]: I0308 22:13:29.505080 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:13:29.505168 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:13:29.505168 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:13:29.505168 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:13:29.505632 master-0 kubenswrapper[7480]: I0308 22:13:29.505195 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:13:30.350556 master-0 kubenswrapper[7480]: I0308 
22:13:30.350455 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7e4fb17aa6f4ce82697c1badb6e3e623","Type":"ContainerStarted","Data":"713d5bb870be4b517e2a3b6934cbc3a8dbb4fb996bc551e64dbb0c038eff7f98"} Mar 08 22:13:30.393508 master-0 kubenswrapper[7480]: I0308 22:13:30.393385 7480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=3.3933560959999998 podStartE2EDuration="3.393356096s" podCreationTimestamp="2026-03-08 22:13:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:13:30.39004476 +0000 UTC m=+960.843665382" watchObservedRunningTime="2026-03-08 22:13:30.393356096 +0000 UTC m=+960.846976698" Mar 08 22:13:30.506730 master-0 kubenswrapper[7480]: I0308 22:13:30.506640 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:13:30.506730 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:13:30.506730 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:13:30.506730 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:13:30.506730 master-0 kubenswrapper[7480]: I0308 22:13:30.506724 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:13:30.781724 master-0 kubenswrapper[7480]: I0308 22:13:30.781511 7480 scope.go:117] "RemoveContainer" containerID="2d5837857b12c31514737c752f1c881539906b79a846525445cc0f9995a692a4" Mar 08 22:13:30.782061 master-0 kubenswrapper[7480]: E0308 22:13:30.781804 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-wklhr_openshift-cluster-storage-operator(c901b468-b8e9-48f8-8050-0d54e24e2adb)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" podUID="c901b468-b8e9-48f8-8050-0d54e24e2adb" Mar 08 22:13:31.503284 master-0 kubenswrapper[7480]: I0308 22:13:31.503135 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:13:31.503284 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:13:31.503284 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:13:31.503284 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:13:31.503284 master-0 kubenswrapper[7480]: I0308 22:13:31.503270 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:13:32.505503 master-0 kubenswrapper[7480]: I0308 22:13:32.505403 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:13:32.505503 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:13:32.505503 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:13:32.505503 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:13:32.506373 master-0 kubenswrapper[7480]: I0308 22:13:32.505541 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:13:33.503366 master-0 kubenswrapper[7480]: I0308 22:13:33.503265 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:13:33.503366 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:13:33.503366 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:13:33.503366 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:13:33.503864 master-0 kubenswrapper[7480]: I0308 22:13:33.503388 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:13:34.504395 master-0 kubenswrapper[7480]: I0308 22:13:34.504191 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:13:34.504395 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:13:34.504395 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:13:34.504395 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:13:34.505453 master-0 kubenswrapper[7480]: I0308 22:13:34.504413 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:13:35.505240 master-0 kubenswrapper[7480]: I0308 22:13:35.504830 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:13:35.505240 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:13:35.505240 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:13:35.505240 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:13:35.505240 master-0 kubenswrapper[7480]: I0308 22:13:35.504945 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:13:35.884040 master-0 kubenswrapper[7480]: E0308 22:13:35.883901 7480 log.go:32] "ExecSync cmd from runtime service failed" err="rpc 
error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f69e8d74039025bc6d1d82ba3b1c35b7b074f10a982a5206f620aa03cd7f1575" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 22:13:35.887165 master-0 kubenswrapper[7480]: E0308 22:13:35.887047 7480 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f69e8d74039025bc6d1d82ba3b1c35b7b074f10a982a5206f620aa03cd7f1575" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 22:13:35.889569 master-0 kubenswrapper[7480]: E0308 22:13:35.889499 7480 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f69e8d74039025bc6d1d82ba3b1c35b7b074f10a982a5206f620aa03cd7f1575" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 22:13:35.889695 master-0 kubenswrapper[7480]: E0308 22:13:35.889575 7480 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-7vnwn" podUID="4d17963f-5dc7-463e-8a72-6025e70a2144" containerName="kube-multus-additional-cni-plugins" Mar 08 22:13:36.503230 master-0 kubenswrapper[7480]: I0308 22:13:36.503129 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:13:36.503230 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:13:36.503230 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:13:36.503230 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:13:36.503734 master-0 kubenswrapper[7480]: I0308 22:13:36.503244 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:13:37.504702 master-0 kubenswrapper[7480]: I0308 22:13:37.504611 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:13:37.504702 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:13:37.504702 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:13:37.504702 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:13:37.505458 master-0 kubenswrapper[7480]: I0308 22:13:37.505168 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:13:38.037707 master-0 kubenswrapper[7480]: I0308 22:13:38.037615 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:13:38.037707 master-0 kubenswrapper[7480]: I0308 22:13:38.037699 
7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:13:38.037707 master-0 kubenswrapper[7480]: I0308 22:13:38.037713 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:13:38.037707 master-0 kubenswrapper[7480]: I0308 22:13:38.037724 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:13:38.046090 master-0 kubenswrapper[7480]: I0308 22:13:38.046021 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:13:38.047041 master-0 kubenswrapper[7480]: I0308 22:13:38.046950 7480 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:13:38.448266 master-0 kubenswrapper[7480]: I0308 22:13:38.447693 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:13:38.449779 master-0 kubenswrapper[7480]: I0308 22:13:38.449713 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:13:38.504933 master-0 kubenswrapper[7480]: I0308 22:13:38.504852 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:13:38.504933 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:13:38.504933 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:13:38.504933 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:13:38.505629 master-0 kubenswrapper[7480]: I0308 22:13:38.504944 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:13:39.505581 master-0 kubenswrapper[7480]: I0308 22:13:39.505468 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:13:39.505581 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:13:39.505581 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:13:39.505581 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:13:39.505581 master-0 kubenswrapper[7480]: I0308 22:13:39.505590 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:13:40.503199 master-0 kubenswrapper[7480]: I0308 22:13:40.503137 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Mar 08 22:13:40.503199 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:13:40.503199 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:13:40.503199 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:13:40.503701 master-0 kubenswrapper[7480]: I0308 22:13:40.503236 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:13:40.800921 master-0 kubenswrapper[7480]: I0308 22:13:40.800852 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-7vnwn_4d17963f-5dc7-463e-8a72-6025e70a2144/kube-multus-additional-cni-plugins/0.log" Mar 08 22:13:40.802235 master-0 kubenswrapper[7480]: I0308 22:13:40.800974 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-7vnwn" Mar 08 22:13:40.894639 master-0 kubenswrapper[7480]: I0308 22:13:40.894517 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4d17963f-5dc7-463e-8a72-6025e70a2144-tuning-conf-dir\") pod \"4d17963f-5dc7-463e-8a72-6025e70a2144\" (UID: \"4d17963f-5dc7-463e-8a72-6025e70a2144\") " Mar 08 22:13:40.895854 master-0 kubenswrapper[7480]: I0308 22:13:40.894714 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgfld\" (UniqueName: \"kubernetes.io/projected/4d17963f-5dc7-463e-8a72-6025e70a2144-kube-api-access-bgfld\") pod \"4d17963f-5dc7-463e-8a72-6025e70a2144\" (UID: \"4d17963f-5dc7-463e-8a72-6025e70a2144\") " Mar 08 22:13:40.896060 master-0 kubenswrapper[7480]: I0308 22:13:40.894763 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d17963f-5dc7-463e-8a72-6025e70a2144-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "4d17963f-5dc7-463e-8a72-6025e70a2144" (UID: "4d17963f-5dc7-463e-8a72-6025e70a2144"). InnerVolumeSpecName "tuning-conf-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:13:40.896784 master-0 kubenswrapper[7480]: I0308 22:13:40.896729 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4d17963f-5dc7-463e-8a72-6025e70a2144-cni-sysctl-allowlist\") pod \"4d17963f-5dc7-463e-8a72-6025e70a2144\" (UID: \"4d17963f-5dc7-463e-8a72-6025e70a2144\") " Mar 08 22:13:40.897057 master-0 kubenswrapper[7480]: I0308 22:13:40.897027 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/4d17963f-5dc7-463e-8a72-6025e70a2144-ready\") pod \"4d17963f-5dc7-463e-8a72-6025e70a2144\" (UID: \"4d17963f-5dc7-463e-8a72-6025e70a2144\") " Mar 08 22:13:40.897671 master-0 kubenswrapper[7480]: I0308 22:13:40.897638 7480 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4d17963f-5dc7-463e-8a72-6025e70a2144-tuning-conf-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 22:13:40.898911 master-0 kubenswrapper[7480]: I0308 22:13:40.898812 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d17963f-5dc7-463e-8a72-6025e70a2144-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "4d17963f-5dc7-463e-8a72-6025e70a2144" (UID: "4d17963f-5dc7-463e-8a72-6025e70a2144"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:13:40.899260 master-0 kubenswrapper[7480]: I0308 22:13:40.899138 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d17963f-5dc7-463e-8a72-6025e70a2144-ready" (OuterVolumeSpecName: "ready") pod "4d17963f-5dc7-463e-8a72-6025e70a2144" (UID: "4d17963f-5dc7-463e-8a72-6025e70a2144"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 22:13:40.899649 master-0 kubenswrapper[7480]: I0308 22:13:40.899560 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d17963f-5dc7-463e-8a72-6025e70a2144-kube-api-access-bgfld" (OuterVolumeSpecName: "kube-api-access-bgfld") pod "4d17963f-5dc7-463e-8a72-6025e70a2144" (UID: "4d17963f-5dc7-463e-8a72-6025e70a2144"). InnerVolumeSpecName "kube-api-access-bgfld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 22:13:41.000316 master-0 kubenswrapper[7480]: I0308 22:13:41.000210 7480 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4d17963f-5dc7-463e-8a72-6025e70a2144-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\"" Mar 08 22:13:41.000316 master-0 kubenswrapper[7480]: I0308 22:13:41.000280 7480 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/4d17963f-5dc7-463e-8a72-6025e70a2144-ready\") on node \"master-0\" DevicePath \"\"" Mar 08 22:13:41.000316 master-0 kubenswrapper[7480]: I0308 22:13:41.000311 7480 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgfld\" (UniqueName: \"kubernetes.io/projected/4d17963f-5dc7-463e-8a72-6025e70a2144-kube-api-access-bgfld\") on node \"master-0\" DevicePath \"\"" Mar 08 22:13:41.468966 master-0 kubenswrapper[7480]: I0308 22:13:41.468889 7480 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-7vnwn_4d17963f-5dc7-463e-8a72-6025e70a2144/kube-multus-additional-cni-plugins/0.log" Mar 08 22:13:41.468966 master-0 kubenswrapper[7480]: I0308 22:13:41.468959 7480 generic.go:334] "Generic (PLEG): container finished" podID="4d17963f-5dc7-463e-8a72-6025e70a2144" containerID="f69e8d74039025bc6d1d82ba3b1c35b7b074f10a982a5206f620aa03cd7f1575" exitCode=137 Mar 08 22:13:41.469482 master-0 kubenswrapper[7480]: I0308 22:13:41.469000 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-7vnwn" event={"ID":"4d17963f-5dc7-463e-8a72-6025e70a2144","Type":"ContainerDied","Data":"f69e8d74039025bc6d1d82ba3b1c35b7b074f10a982a5206f620aa03cd7f1575"} Mar 08 22:13:41.469482 master-0 kubenswrapper[7480]: I0308 22:13:41.469037 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-7vnwn" event={"ID":"4d17963f-5dc7-463e-8a72-6025e70a2144","Type":"ContainerDied","Data":"50d6b53d454870d697b9c573115c109e90d3f7b9c2856d48b483ff4f7d0df63f"} Mar 08 22:13:41.469482 master-0 kubenswrapper[7480]: I0308 22:13:41.469058 7480 scope.go:117] "RemoveContainer" containerID="f69e8d74039025bc6d1d82ba3b1c35b7b074f10a982a5206f620aa03cd7f1575" Mar 08 22:13:41.469482 master-0 kubenswrapper[7480]: I0308 22:13:41.469228 7480 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-7vnwn" Mar 08 22:13:41.505033 master-0 kubenswrapper[7480]: I0308 22:13:41.504928 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:13:41.505033 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:13:41.505033 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:13:41.505033 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:13:41.505033 master-0 kubenswrapper[7480]: I0308 22:13:41.505014 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:13:41.510538 master-0 kubenswrapper[7480]: I0308 22:13:41.510493 7480 scope.go:117] "RemoveContainer" containerID="f69e8d74039025bc6d1d82ba3b1c35b7b074f10a982a5206f620aa03cd7f1575" Mar 08 22:13:41.511152 master-0 kubenswrapper[7480]: I0308 22:13:41.511113 7480 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-7vnwn"] Mar 08 22:13:41.511254 master-0 kubenswrapper[7480]: E0308 22:13:41.511184 7480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f69e8d74039025bc6d1d82ba3b1c35b7b074f10a982a5206f620aa03cd7f1575\": container with ID starting with f69e8d74039025bc6d1d82ba3b1c35b7b074f10a982a5206f620aa03cd7f1575 not found: ID does not exist" containerID="f69e8d74039025bc6d1d82ba3b1c35b7b074f10a982a5206f620aa03cd7f1575" Mar 08 22:13:41.511254 master-0 kubenswrapper[7480]: I0308 22:13:41.511227 7480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f69e8d74039025bc6d1d82ba3b1c35b7b074f10a982a5206f620aa03cd7f1575"} err="failed to get container status \"f69e8d74039025bc6d1d82ba3b1c35b7b074f10a982a5206f620aa03cd7f1575\": rpc error: code = NotFound desc = could not find container \"f69e8d74039025bc6d1d82ba3b1c35b7b074f10a982a5206f620aa03cd7f1575\": container with ID starting with f69e8d74039025bc6d1d82ba3b1c35b7b074f10a982a5206f620aa03cd7f1575 not found: ID does not exist" Mar 08 22:13:41.515031 master-0 kubenswrapper[7480]: I0308 22:13:41.514955 7480 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-7vnwn"] Mar 08 22:13:41.798509 master-0 kubenswrapper[7480]: I0308 22:13:41.798339 7480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d17963f-5dc7-463e-8a72-6025e70a2144" path="/var/lib/kubelet/pods/4d17963f-5dc7-463e-8a72-6025e70a2144/volumes" Mar 08 22:13:42.504986 master-0 kubenswrapper[7480]: I0308 22:13:42.504848 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:13:42.504986 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:13:42.504986 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:13:42.504986 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:13:42.506252 master-0 kubenswrapper[7480]: I0308 22:13:42.504992 7480 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:13:42.782711 master-0 kubenswrapper[7480]: I0308 22:13:42.782478 7480 scope.go:117] "RemoveContainer" containerID="2d5837857b12c31514737c752f1c881539906b79a846525445cc0f9995a692a4" Mar 08 22:13:42.783109 master-0 kubenswrapper[7480]: E0308 22:13:42.782785 7480 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-wklhr_openshift-cluster-storage-operator(c901b468-b8e9-48f8-8050-0d54e24e2adb)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" podUID="c901b468-b8e9-48f8-8050-0d54e24e2adb" Mar 08 22:13:43.503628 master-0 kubenswrapper[7480]: I0308 22:13:43.503537 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:13:43.503628 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:13:43.503628 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:13:43.503628 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:13:43.504484 master-0 kubenswrapper[7480]: I0308 22:13:43.503663 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:13:44.253029 master-0 kubenswrapper[7480]: I0308 22:13:44.252942 7480 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 08 22:13:44.253841 master-0 kubenswrapper[7480]: E0308 22:13:44.253504 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e851e2-74fc-4f4c-b907-3c9158c59cd4" containerName="installer" Mar 08 22:13:44.253841 master-0 kubenswrapper[7480]: I0308 22:13:44.253526 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e851e2-74fc-4f4c-b907-3c9158c59cd4" containerName="installer" Mar 08 22:13:44.253841 master-0 kubenswrapper[7480]: E0308 22:13:44.253550 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d17963f-5dc7-463e-8a72-6025e70a2144" containerName="kube-multus-additional-cni-plugins" Mar 08 22:13:44.253841 master-0 kubenswrapper[7480]: I0308 22:13:44.253560 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d17963f-5dc7-463e-8a72-6025e70a2144" containerName="kube-multus-additional-cni-plugins" Mar 08 22:13:44.253841 master-0 kubenswrapper[7480]: I0308 22:13:44.253755 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0e851e2-74fc-4f4c-b907-3c9158c59cd4" containerName="installer" Mar 08 22:13:44.253841 master-0 kubenswrapper[7480]: I0308 22:13:44.253779 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d17963f-5dc7-463e-8a72-6025e70a2144" containerName="kube-multus-additional-cni-plugins" Mar 08 22:13:44.254408 master-0 kubenswrapper[7480]: I0308 22:13:44.254373 7480 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 08 22:13:44.254653 
master-0 kubenswrapper[7480]: I0308 22:13:44.254606 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:13:44.254776 master-0 kubenswrapper[7480]: I0308 22:13:44.254721 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" containerID="cri-o://8d8ef0d2f7570923c4fa1a9617292413de2da9937c525cc65b8fbe3433d3ca3e" gracePeriod=15 Mar 08 22:13:44.255005 master-0 kubenswrapper[7480]: I0308 22:13:44.254872 7480 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://81880effd0e6f8229eefecfa74f76d169bbd4c02b4efe891a8b85181d0ccd2ca" gracePeriod=15 Mar 08 22:13:44.256163 master-0 kubenswrapper[7480]: I0308 22:13:44.255621 7480 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 08 22:13:44.256163 master-0 kubenswrapper[7480]: E0308 22:13:44.255958 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup" Mar 08 22:13:44.256163 master-0 kubenswrapper[7480]: I0308 22:13:44.255977 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup" Mar 08 22:13:44.256163 master-0 kubenswrapper[7480]: E0308 22:13:44.256145 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" Mar 08 22:13:44.256494 master-0 kubenswrapper[7480]: I0308 22:13:44.256160 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" Mar 08 22:13:44.256494 master-0 kubenswrapper[7480]: E0308 22:13:44.256286 7480 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" Mar 08 22:13:44.256494 master-0 kubenswrapper[7480]: I0308 22:13:44.256297 7480 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" Mar 08 22:13:44.256703 master-0 kubenswrapper[7480]: I0308 22:13:44.256584 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" Mar 08 22:13:44.256703 master-0 kubenswrapper[7480]: I0308 22:13:44.256630 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup" Mar 08 22:13:44.256703 master-0 kubenswrapper[7480]: I0308 22:13:44.256655 7480 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" Mar 08 22:13:44.261017 master-0 kubenswrapper[7480]: I0308 22:13:44.259676 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:13:44.327738 master-0 kubenswrapper[7480]: E0308 22:13:44.327650 7480 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:13:44.348879 master-0 kubenswrapper[7480]: E0308 22:13:44.348799 7480 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:13:44.380522 master-0 kubenswrapper[7480]: I0308 22:13:44.380478 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:13:44.380643 master-0 kubenswrapper[7480]: I0308 22:13:44.380543 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:13:44.380643 master-0 kubenswrapper[7480]: I0308 22:13:44.380581 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:13:44.380643 master-0 kubenswrapper[7480]: I0308 22:13:44.380608 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:13:44.380743 master-0 kubenswrapper[7480]: I0308 22:13:44.380653 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:13:44.380743 master-0 kubenswrapper[7480]: I0308 22:13:44.380703 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:13:44.380743 master-0 kubenswrapper[7480]: I0308 22:13:44.380730 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:13:44.380904 master-0 kubenswrapper[7480]: I0308 22:13:44.380853 7480 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:13:44.483109 master-0 kubenswrapper[7480]: I0308 22:13:44.483006 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:13:44.483109 master-0 kubenswrapper[7480]: I0308 22:13:44.483116 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:13:44.483478 master-0 kubenswrapper[7480]: I0308 22:13:44.483290 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:13:44.483478 master-0 kubenswrapper[7480]: I0308 22:13:44.483324 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:13:44.483478 master-0 kubenswrapper[7480]: I0308 22:13:44.483401 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:13:44.483579 master-0 kubenswrapper[7480]: I0308 22:13:44.483496 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:13:44.483579 master-0 kubenswrapper[7480]: I0308 22:13:44.483533 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:13:44.483643 master-0 kubenswrapper[7480]: I0308 22:13:44.483586 7480 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:13:44.483712 master-0 kubenswrapper[7480]: I0308 22:13:44.483661 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:13:44.483775 master-0 kubenswrapper[7480]: I0308 22:13:44.483748 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:13:44.483848 master-0 kubenswrapper[7480]: I0308 22:13:44.483812 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:13:44.483886 master-0 kubenswrapper[7480]: I0308 22:13:44.483868 7480 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:13:44.483921 master-0 kubenswrapper[7480]: I0308 22:13:44.483873 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:13:44.483921 master-0 kubenswrapper[7480]: I0308 22:13:44.483899 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:13:44.484037 master-0 kubenswrapper[7480]: I0308 22:13:44.483997 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:13:44.484119 master-0 kubenswrapper[7480]: I0308 22:13:44.484054 7480 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:13:44.503367 master-0 kubenswrapper[7480]: I0308 22:13:44.503276 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:13:44.503367 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:13:44.503367 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:13:44.503367 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:13:44.503595 master-0 kubenswrapper[7480]: I0308 22:13:44.503370 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:13:44.504926 master-0 kubenswrapper[7480]: I0308 22:13:44.504881 7480 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="81880effd0e6f8229eefecfa74f76d169bbd4c02b4efe891a8b85181d0ccd2ca" exitCode=0 Mar 08 22:13:44.507624 master-0 kubenswrapper[7480]: I0308 22:13:44.507582 7480 generic.go:334] "Generic (PLEG): container finished" podID="1d188983-1f19-4c8e-b604-034bd6308139" containerID="457fd83835c6efbf11a60689076f6b36dc5b753b2b41e47858b503eb7cab62fc" exitCode=0 Mar 08 22:13:44.507755 master-0 kubenswrapper[7480]: I0308 22:13:44.507662 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"1d188983-1f19-4c8e-b604-034bd6308139","Type":"ContainerDied","Data":"457fd83835c6efbf11a60689076f6b36dc5b753b2b41e47858b503eb7cab62fc"} Mar 08 22:13:44.509539 master-0 kubenswrapper[7480]: I0308 22:13:44.509456 7480 status_manager.go:851] "Failed to get status for pod" podUID="1d188983-1f19-4c8e-b604-034bd6308139" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 22:13:44.629378 master-0 kubenswrapper[7480]: I0308 22:13:44.629284 7480 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:13:44.650351 master-0 kubenswrapper[7480]: I0308 22:13:44.650264 7480 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:13:44.663705 master-0 kubenswrapper[7480]: W0308 22:13:44.663604 7480 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacbb43bf2cf27ed60d1f635fd6638ac7.slice/crio-7657ee7fb6569f1c4ef325644eaa107755f9e16754fbff803dce351304de134f WatchSource:0}: Error finding container 7657ee7fb6569f1c4ef325644eaa107755f9e16754fbff803dce351304de134f: Status 404 returned error can't find the container with id 7657ee7fb6569f1c4ef325644eaa107755f9e16754fbff803dce351304de134f Mar 08 22:13:44.670919 master-0 kubenswrapper[7480]: E0308 22:13:44.670719 7480 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189afd67965684eb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:acbb43bf2cf27ed60d1f635fd6638ac7,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 22:13:44.669422827 +0000 UTC m=+975.123043429,LastTimestamp:2026-03-08 22:13:44.669422827 +0000 UTC m=+975.123043429,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 22:13:45.503871 master-0 kubenswrapper[7480]: I0308 22:13:45.503747 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:13:45.503871 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:13:45.503871 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:13:45.503871 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:13:45.505378 master-0 kubenswrapper[7480]: I0308 22:13:45.503886 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:13:45.520901 master-0 kubenswrapper[7480]: I0308 22:13:45.520820 7480 generic.go:334] "Generic (PLEG): container finished" podID="4c3280e9367536f782caf8bdc07edb85" containerID="d76596264e33079d456422f8118c10602257329cc4dfb420dc8ccdda43115051" exitCode=0 Mar 08 22:13:45.521367 master-0 kubenswrapper[7480]: I0308 22:13:45.520932 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"4c3280e9367536f782caf8bdc07edb85","Type":"ContainerDied","Data":"d76596264e33079d456422f8118c10602257329cc4dfb420dc8ccdda43115051"} Mar 08 22:13:45.521367 master-0 kubenswrapper[7480]: I0308 22:13:45.520983 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"4c3280e9367536f782caf8bdc07edb85","Type":"ContainerStarted","Data":"318c84ebaf730c7c85b63db579f8af63f5545b50f015236d0cbd1a16b9495c4d"} Mar 08 22:13:45.523047 master-0 kubenswrapper[7480]: I0308 22:13:45.522987 7480 status_manager.go:851] "Failed to get status for pod" podUID="1d188983-1f19-4c8e-b604-034bd6308139" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 22:13:45.523559 master-0 kubenswrapper[7480]: E0308 22:13:45.523136 7480 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:13:45.526645 master-0 kubenswrapper[7480]: I0308 22:13:45.526581 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"acbb43bf2cf27ed60d1f635fd6638ac7","Type":"ContainerStarted","Data":"fdeb5fad707aa2fd286d6ec0c3bfa8d45fd4f299a386859f868108e7ec60d695"} Mar 08 22:13:45.526770 master-0 kubenswrapper[7480]: I0308 22:13:45.526673 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"acbb43bf2cf27ed60d1f635fd6638ac7","Type":"ContainerStarted","Data":"7657ee7fb6569f1c4ef325644eaa107755f9e16754fbff803dce351304de134f"} Mar 08 22:13:45.529582 master-0 kubenswrapper[7480]: I0308 22:13:45.528129 7480 status_manager.go:851] "Failed to get status for pod" podUID="1d188983-1f19-4c8e-b604-034bd6308139" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 22:13:45.529582 master-0 kubenswrapper[7480]: E0308 22:13:45.528303 7480 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:13:45.874607 master-0 kubenswrapper[7480]: I0308 22:13:45.874566 7480 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 22:13:45.876510 master-0 kubenswrapper[7480]: I0308 22:13:45.876315 7480 status_manager.go:851] "Failed to get status for pod" podUID="1d188983-1f19-4c8e-b604-034bd6308139" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 22:13:46.011924 master-0 kubenswrapper[7480]: I0308 22:13:46.011338 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1d188983-1f19-4c8e-b604-034bd6308139-var-lock\") pod \"1d188983-1f19-4c8e-b604-034bd6308139\" (UID: \"1d188983-1f19-4c8e-b604-034bd6308139\") " Mar 08 22:13:46.011924 master-0 kubenswrapper[7480]: I0308 22:13:46.011556 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access\") pod \"1d188983-1f19-4c8e-b604-034bd6308139\" (UID: \"1d188983-1f19-4c8e-b604-034bd6308139\") " Mar 08 22:13:46.011924 master-0 kubenswrapper[7480]: I0308 22:13:46.011703 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1d188983-1f19-4c8e-b604-034bd6308139-kubelet-dir\") pod \"1d188983-1f19-4c8e-b604-034bd6308139\" (UID: \"1d188983-1f19-4c8e-b604-034bd6308139\") " Mar 08 22:13:46.011924 master-0 kubenswrapper[7480]: I0308 22:13:46.011865 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d188983-1f19-4c8e-b604-034bd6308139-var-lock" (OuterVolumeSpecName: "var-lock") pod "1d188983-1f19-4c8e-b604-034bd6308139" (UID: "1d188983-1f19-4c8e-b604-034bd6308139"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:13:46.012560 master-0 kubenswrapper[7480]: I0308 22:13:46.012087 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d188983-1f19-4c8e-b604-034bd6308139-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1d188983-1f19-4c8e-b604-034bd6308139" (UID: "1d188983-1f19-4c8e-b604-034bd6308139"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:13:46.012894 master-0 kubenswrapper[7480]: I0308 22:13:46.012813 7480 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1d188983-1f19-4c8e-b604-034bd6308139-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 08 22:13:46.012894 master-0 kubenswrapper[7480]: I0308 22:13:46.012855 7480 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1d188983-1f19-4c8e-b604-034bd6308139-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 22:13:46.016642 master-0 kubenswrapper[7480]: I0308 22:13:46.016581 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1d188983-1f19-4c8e-b604-034bd6308139" (UID: "1d188983-1f19-4c8e-b604-034bd6308139"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 22:13:46.114397 master-0 kubenswrapper[7480]: I0308 22:13:46.114252 7480 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 08 22:13:46.503806 master-0 kubenswrapper[7480]: I0308 22:13:46.503734 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:13:46.503806 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:13:46.503806 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:13:46.503806 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:13:46.504662 master-0 kubenswrapper[7480]: I0308 22:13:46.503828 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:13:46.559777 master-0 kubenswrapper[7480]: I0308 22:13:46.559713 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 22:13:46.560304 master-0 kubenswrapper[7480]: I0308 22:13:46.560184 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"1d188983-1f19-4c8e-b604-034bd6308139","Type":"ContainerDied","Data":"f31d8d53c8b0a414548414159bd2f7308b0afe83a8791eaea5070e54129415ad"} Mar 08 22:13:46.560388 master-0 kubenswrapper[7480]: I0308 22:13:46.560306 7480 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f31d8d53c8b0a414548414159bd2f7308b0afe83a8791eaea5070e54129415ad" Mar 08 22:13:46.562282 master-0 kubenswrapper[7480]: I0308 22:13:46.562234 7480 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="8d8ef0d2f7570923c4fa1a9617292413de2da9937c525cc65b8fbe3433d3ca3e" exitCode=0 Mar 08 22:13:46.568481 master-0 kubenswrapper[7480]: I0308 22:13:46.567259 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"4c3280e9367536f782caf8bdc07edb85","Type":"ContainerStarted","Data":"b90ed6d3a021521ef55628a3ddd89c995649bbb6a8fee39458005032f844912d"} Mar 08 22:13:46.568481 master-0 kubenswrapper[7480]: I0308 22:13:46.567322 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"4c3280e9367536f782caf8bdc07edb85","Type":"ContainerStarted","Data":"fb46cbac56af10ae89fd868088c513812dc7dc7a80fc88543a41d6671502fafd"} Mar 08 22:13:47.015033 master-0 kubenswrapper[7480]: I0308 22:13:47.014985 7480 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 22:13:47.131679 master-0 kubenswrapper[7480]: I0308 22:13:47.131608 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " Mar 08 22:13:47.131826 master-0 kubenswrapper[7480]: I0308 22:13:47.131717 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " Mar 08 22:13:47.131826 master-0 kubenswrapper[7480]: I0308 22:13:47.131769 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " Mar 08 22:13:47.131826 master-0 kubenswrapper[7480]: I0308 22:13:47.131798 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " Mar 08 22:13:47.132105 master-0 kubenswrapper[7480]: I0308 22:13:47.131885 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " Mar 08 22:13:47.132105 master-0 kubenswrapper[7480]: I0308 22:13:47.131929 7480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " Mar 08 22:13:47.132396 master-0 kubenswrapper[7480]: I0308 22:13:47.132356 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:13:47.132396 master-0 kubenswrapper[7480]: I0308 22:13:47.132394 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config" (OuterVolumeSpecName: "config") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:13:47.132546 master-0 kubenswrapper[7480]: I0308 22:13:47.132414 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "ssl-certs-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:13:47.132546 master-0 kubenswrapper[7480]: I0308 22:13:47.132468 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:13:47.132546 master-0 kubenswrapper[7480]: I0308 22:13:47.132483 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs" (OuterVolumeSpecName: "logs") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:13:47.132546 master-0 kubenswrapper[7480]: I0308 22:13:47.132498 7480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets" (OuterVolumeSpecName: "secrets") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:13:47.236109 master-0 kubenswrapper[7480]: I0308 22:13:47.235486 7480 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") on node \"master-0\" DevicePath \"\"" Mar 08 22:13:47.236109 master-0 kubenswrapper[7480]: I0308 22:13:47.235598 7480 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Mar 08 22:13:47.236109 master-0 kubenswrapper[7480]: I0308 22:13:47.235623 7480 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Mar 08 22:13:47.236109 master-0 kubenswrapper[7480]: I0308 22:13:47.235646 7480 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") on node \"master-0\" DevicePath \"\"" Mar 08 22:13:47.236109 master-0 kubenswrapper[7480]: I0308 22:13:47.235662 7480 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") on node \"master-0\" DevicePath \"\"" Mar 08 22:13:47.236109 master-0 kubenswrapper[7480]: I0308 22:13:47.235677 7480 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 22:13:47.511102 master-0 kubenswrapper[7480]: I0308 22:13:47.509523 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:13:47.511102 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:13:47.511102 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:13:47.511102 master-0 kubenswrapper[7480]: healthz check failed 
Mar 08 22:13:47.511102 master-0 kubenswrapper[7480]: I0308 22:13:47.509592 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:13:47.669154 master-0 kubenswrapper[7480]: I0308 22:13:47.668568 7480 scope.go:117] "RemoveContainer" containerID="81880effd0e6f8229eefecfa74f76d169bbd4c02b4efe891a8b85181d0ccd2ca" Mar 08 22:13:47.669154 master-0 kubenswrapper[7480]: I0308 22:13:47.668769 7480 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 22:13:47.707656 master-0 kubenswrapper[7480]: I0308 22:13:47.706880 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"4c3280e9367536f782caf8bdc07edb85","Type":"ContainerStarted","Data":"b4705c2d549fc59459e0df6c2bc03a0f2277a3053ddf1226d64637e576e24057"} Mar 08 22:13:47.707656 master-0 kubenswrapper[7480]: I0308 22:13:47.706936 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"4c3280e9367536f782caf8bdc07edb85","Type":"ContainerStarted","Data":"87234509224b5aa296c3edb61a3b1a3b781bb8769597d438cf890a47a6ad14f7"} Mar 08 22:13:47.707656 master-0 kubenswrapper[7480]: I0308 22:13:47.706949 7480 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"4c3280e9367536f782caf8bdc07edb85","Type":"ContainerStarted","Data":"a52268c7a9c6b5185ba5bf5a9ab921c572a0729b52859577eadf1743be8f6fc4"} Mar 08 22:13:47.707656 master-0 kubenswrapper[7480]: I0308 22:13:47.707292 7480 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:13:47.728262 master-0 kubenswrapper[7480]: I0308 22:13:47.727758 7480 scope.go:117] "RemoveContainer" containerID="8d8ef0d2f7570923c4fa1a9617292413de2da9937c525cc65b8fbe3433d3ca3e" Mar 08 22:13:47.780770 master-0 kubenswrapper[7480]: I0308 22:13:47.780717 7480 scope.go:117] "RemoveContainer" containerID="da776c7c3ffac41c9193152c13ad24a2c2d14135225b75898e7c53fb459df62b" Mar 08 22:13:47.818193 master-0 kubenswrapper[7480]: I0308 22:13:47.818140 7480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f77c8e18b751d90bc0dfe2d4e304050" path="/var/lib/kubelet/pods/5f77c8e18b751d90bc0dfe2d4e304050/volumes" Mar 08 22:13:47.819058 master-0 kubenswrapper[7480]: I0308 22:13:47.818833 7480 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Mar 08 22:13:48.504066 master-0 kubenswrapper[7480]: I0308 22:13:48.503975 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:13:48.504066 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:13:48.504066 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:13:48.504066 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:13:48.504631 master-0 kubenswrapper[7480]: I0308 22:13:48.504126 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" 
podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:13:49.509506 master-0 kubenswrapper[7480]: I0308 22:13:49.508423 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:13:49.509506 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:13:49.509506 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:13:49.509506 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:13:49.509506 master-0 kubenswrapper[7480]: I0308 22:13:49.508484 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:13:50.505597 master-0 kubenswrapper[7480]: I0308 22:13:50.504981 7480 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-4fsdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 22:13:50.505597 master-0 kubenswrapper[7480]: [-]has-synced failed: reason withheld Mar 08 22:13:50.505597 master-0 kubenswrapper[7480]: [+]process-running ok Mar 08 22:13:50.505597 master-0 kubenswrapper[7480]: healthz check failed Mar 08 22:13:50.505597 master-0 kubenswrapper[7480]: I0308 22:13:50.505128 7480 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" podUID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 22:13:50.610634 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 08 22:13:50.628212 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 08 22:13:50.628496 master-0 systemd[1]: Stopped Kubernetes Kubelet. Mar 08 22:13:50.629575 master-0 systemd[1]: kubelet.service: Consumed 2min 44.628s CPU time. Mar 08 22:13:50.646288 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 08 22:13:50.775146 master-0 kubenswrapper[29458]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 08 22:13:50.775146 master-0 kubenswrapper[29458]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 08 22:13:50.775146 master-0 kubenswrapper[29458]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 08 22:13:50.775146 master-0 kubenswrapper[29458]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 08 22:13:50.775146 master-0 kubenswrapper[29458]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Mar 08 22:13:50.775146 master-0 kubenswrapper[29458]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 08 22:13:50.775970 master-0 kubenswrapper[29458]: I0308 22:13:50.775172 29458 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 08 22:13:50.778491 master-0 kubenswrapper[29458]: W0308 22:13:50.778459 29458 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 08 22:13:50.778491 master-0 kubenswrapper[29458]: W0308 22:13:50.778485 29458 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 08 22:13:50.778491 master-0 kubenswrapper[29458]: W0308 22:13:50.778491 29458 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 08 22:13:50.778491 master-0 kubenswrapper[29458]: W0308 22:13:50.778496 29458 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 08 22:13:50.778683 master-0 kubenswrapper[29458]: W0308 22:13:50.778501 29458 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 08 22:13:50.778683 master-0 kubenswrapper[29458]: W0308 22:13:50.778505 29458 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 08 22:13:50.778683 master-0 kubenswrapper[29458]: W0308 22:13:50.778509 29458 feature_gate.go:330] unrecognized feature gate: Example Mar 08 22:13:50.778683 master-0 kubenswrapper[29458]: W0308 22:13:50.778514 29458 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 08 22:13:50.778683 master-0 kubenswrapper[29458]: W0308 22:13:50.778518 29458 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 08 22:13:50.778683 master-0 kubenswrapper[29458]: W0308 22:13:50.778523 29458 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 08 22:13:50.778683 master-0 kubenswrapper[29458]: W0308 22:13:50.778526 29458 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 08 22:13:50.778683 master-0 kubenswrapper[29458]: W0308 22:13:50.778530 29458 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 08 22:13:50.778683 master-0 kubenswrapper[29458]: W0308 22:13:50.778534 29458 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 08 22:13:50.778683 master-0 kubenswrapper[29458]: W0308 22:13:50.778538 29458 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 08 22:13:50.778683 master-0 kubenswrapper[29458]: W0308 22:13:50.778542 29458 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 08 22:13:50.778683 master-0 kubenswrapper[29458]: W0308 22:13:50.778546 29458 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 08 22:13:50.778683 master-0 kubenswrapper[29458]: W0308 22:13:50.778549 29458 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 08 22:13:50.778683 master-0 kubenswrapper[29458]: W0308 22:13:50.778553 29458 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 08 22:13:50.778683 master-0 kubenswrapper[29458]: W0308 22:13:50.778564 29458 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 08 22:13:50.778683 master-0 kubenswrapper[29458]: 
Mar 08 22:13:50.778683 master-0 kubenswrapper[29458]: W0308 22:13:50.778572 29458 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 08 22:13:50.778683 master-0 kubenswrapper[29458]: W0308 22:13:50.778576 29458 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 08 22:13:50.778683 master-0 kubenswrapper[29458]: W0308 22:13:50.778579 29458 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 08 22:13:50.778683 master-0 kubenswrapper[29458]: W0308 22:13:50.778584 29458 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 08 22:13:50.779574 master-0 kubenswrapper[29458]: W0308 22:13:50.778588 29458 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 08 22:13:50.779574 master-0 kubenswrapper[29458]: W0308 22:13:50.778592 29458 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 08 22:13:50.779574 master-0 kubenswrapper[29458]: W0308 22:13:50.778595 29458 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 08 22:13:50.779574 master-0 kubenswrapper[29458]: W0308 22:13:50.778599 29458 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 08 22:13:50.779574 master-0 kubenswrapper[29458]: W0308 22:13:50.778603 29458 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 08 22:13:50.779574 master-0 kubenswrapper[29458]: W0308 22:13:50.778607 29458 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 08 22:13:50.779574 master-0 kubenswrapper[29458]: W0308 22:13:50.778613 29458 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 08 22:13:50.779574 master-0 kubenswrapper[29458]: W0308 22:13:50.778618 29458 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 08 22:13:50.779574 master-0 kubenswrapper[29458]: W0308 22:13:50.778622 29458 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 08 22:13:50.779574 master-0 kubenswrapper[29458]: W0308 22:13:50.778626 29458 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 08 22:13:50.779574 master-0 kubenswrapper[29458]: W0308 22:13:50.778630 29458 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 08 22:13:50.779574 master-0 kubenswrapper[29458]: W0308 22:13:50.778633 29458 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 08 22:13:50.779574 master-0 kubenswrapper[29458]: W0308 22:13:50.778637 29458 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 08 22:13:50.779574 master-0 kubenswrapper[29458]: W0308 22:13:50.778641 29458 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 08 22:13:50.779574 master-0 kubenswrapper[29458]: W0308 22:13:50.778645 29458 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 08 22:13:50.779574 master-0 kubenswrapper[29458]: W0308 22:13:50.778649 29458 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 08 22:13:50.779574 master-0 kubenswrapper[29458]: W0308 22:13:50.778652 29458 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 08 22:13:50.779574 master-0 kubenswrapper[29458]: W0308 22:13:50.778656 29458 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 08 22:13:50.779574 master-0 kubenswrapper[29458]: W0308 22:13:50.778660 29458 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 08 22:13:50.779574 master-0 kubenswrapper[29458]: W0308 22:13:50.778663 29458 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 08 22:13:50.781023 master-0 kubenswrapper[29458]: W0308 22:13:50.778667 29458 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 08 22:13:50.781023 master-0 kubenswrapper[29458]: W0308 22:13:50.778671 29458 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 08 22:13:50.781023 master-0 kubenswrapper[29458]: W0308 22:13:50.778674 29458 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 08 22:13:50.781023 master-0 kubenswrapper[29458]: W0308 22:13:50.778678 29458 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 08 22:13:50.781023 master-0 kubenswrapper[29458]: W0308 22:13:50.778681 29458 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 08 22:13:50.781023 master-0 kubenswrapper[29458]: W0308 22:13:50.778685 29458 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 08 22:13:50.781023 master-0 kubenswrapper[29458]: W0308 22:13:50.778689 29458 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 08 22:13:50.781023 master-0 kubenswrapper[29458]: W0308 22:13:50.778694 29458 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 08 22:13:50.781023 master-0 kubenswrapper[29458]: W0308 22:13:50.778700 29458 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 08 22:13:50.781023 master-0 kubenswrapper[29458]: W0308 22:13:50.778704 29458 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 08 22:13:50.781023 master-0 kubenswrapper[29458]: W0308 22:13:50.778709 29458 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 08 22:13:50.781023 master-0 kubenswrapper[29458]: W0308 22:13:50.778714 29458 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 08 22:13:50.781023 master-0 kubenswrapper[29458]: W0308 22:13:50.778718 29458 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 08 22:13:50.781023 master-0 kubenswrapper[29458]: W0308 22:13:50.778722 29458 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 08 22:13:50.781023 master-0 kubenswrapper[29458]: W0308 22:13:50.778726 29458 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 08 22:13:50.781023 master-0 kubenswrapper[29458]: W0308 22:13:50.778730 29458 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 08 22:13:50.781023 master-0 kubenswrapper[29458]: W0308 22:13:50.778733 29458 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 08 22:13:50.781023 master-0 kubenswrapper[29458]: W0308 22:13:50.778742 29458 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 08 22:13:50.781023 master-0 kubenswrapper[29458]: W0308 22:13:50.778746 29458 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 08 22:13:50.781814 master-0 kubenswrapper[29458]: W0308 22:13:50.778750 29458 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 08 22:13:50.781814 master-0 kubenswrapper[29458]: W0308 22:13:50.778753 29458 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 08 22:13:50.781814 master-0 kubenswrapper[29458]: W0308 22:13:50.778757 29458 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 08 22:13:50.781814 master-0 kubenswrapper[29458]: W0308 22:13:50.778761 29458 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 08 22:13:50.781814 master-0 kubenswrapper[29458]: W0308 22:13:50.778764 29458 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 08 22:13:50.781814 master-0 kubenswrapper[29458]: W0308 22:13:50.778768 29458 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 08 22:13:50.781814 master-0 kubenswrapper[29458]: W0308 22:13:50.778772 29458 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 08 22:13:50.781814 master-0 kubenswrapper[29458]: W0308 22:13:50.778777 29458 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 08 22:13:50.781814 master-0 kubenswrapper[29458]: W0308 22:13:50.778781 29458 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 08 22:13:50.781814 master-0 kubenswrapper[29458]: I0308 22:13:50.778872 29458 flags.go:64] FLAG: --address="0.0.0.0"
Mar 08 22:13:50.781814 master-0 kubenswrapper[29458]: I0308 22:13:50.778883 29458 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 08 22:13:50.781814 master-0 kubenswrapper[29458]: I0308 22:13:50.778891 29458 flags.go:64] FLAG: --anonymous-auth="true"
Mar 08 22:13:50.781814 master-0 kubenswrapper[29458]: I0308 22:13:50.778896 29458 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 08 22:13:50.781814 master-0 kubenswrapper[29458]: I0308 22:13:50.778902 29458 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 08 22:13:50.781814 master-0 kubenswrapper[29458]: I0308 22:13:50.778907 29458 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 08 22:13:50.781814 master-0 kubenswrapper[29458]: I0308 22:13:50.778913 29458 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 08 22:13:50.781814 master-0 kubenswrapper[29458]: I0308 22:13:50.778919 29458 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 08 22:13:50.781814 master-0 kubenswrapper[29458]: I0308 22:13:50.778924 29458 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 08 22:13:50.781814 master-0 kubenswrapper[29458]: I0308 22:13:50.778928 29458 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 08 22:13:50.781814 master-0 kubenswrapper[29458]: I0308 22:13:50.778933 29458 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 08 22:13:50.781814 master-0 kubenswrapper[29458]: I0308 22:13:50.778938 29458 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 08 22:13:50.782849 master-0 kubenswrapper[29458]: I0308 22:13:50.778942 29458 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 08 22:13:50.782849 master-0 kubenswrapper[29458]: I0308 22:13:50.778946 29458 flags.go:64] FLAG: --cgroup-root=""
Mar 08 22:13:50.782849 master-0 kubenswrapper[29458]: I0308 22:13:50.778951 29458 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 08 22:13:50.782849 master-0 kubenswrapper[29458]: I0308 22:13:50.778955 29458 flags.go:64] FLAG: --client-ca-file=""
Mar 08 22:13:50.782849 master-0 kubenswrapper[29458]: I0308 22:13:50.778960 29458 flags.go:64] FLAG: --cloud-config=""
Mar 08 22:13:50.782849 master-0 kubenswrapper[29458]: I0308 22:13:50.778964 29458 flags.go:64] FLAG: --cloud-provider=""
Mar 08 22:13:50.782849 master-0 kubenswrapper[29458]: I0308 22:13:50.778969 29458 flags.go:64] FLAG: --cluster-dns="[]"
Mar 08 22:13:50.782849 master-0 kubenswrapper[29458]: I0308 22:13:50.778975 29458 flags.go:64] FLAG: --cluster-domain=""
Mar 08 22:13:50.782849 master-0 kubenswrapper[29458]: I0308 22:13:50.778979 29458 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 08 22:13:50.782849 master-0 kubenswrapper[29458]: I0308 22:13:50.778986 29458 flags.go:64] FLAG: --config-dir=""
Mar 08 22:13:50.782849 master-0 kubenswrapper[29458]: I0308 22:13:50.778991 29458 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 08 22:13:50.782849 master-0 kubenswrapper[29458]: I0308 22:13:50.778996 29458 flags.go:64] FLAG: --container-log-max-files="5"
Mar 08 22:13:50.782849 master-0 kubenswrapper[29458]: I0308 22:13:50.779004 29458 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 08 22:13:50.782849 master-0 kubenswrapper[29458]: I0308 22:13:50.779009 29458 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 08 22:13:50.782849 master-0 kubenswrapper[29458]: I0308 22:13:50.779014 29458 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 08 22:13:50.782849 master-0 kubenswrapper[29458]: I0308 22:13:50.779019 29458 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 08 22:13:50.782849 master-0 kubenswrapper[29458]: I0308 22:13:50.779023 29458 flags.go:64] FLAG: --contention-profiling="false"
Mar 08 22:13:50.782849 master-0 kubenswrapper[29458]: I0308 22:13:50.779029 29458 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 08 22:13:50.782849 master-0 kubenswrapper[29458]: I0308 22:13:50.779035 29458 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 08 22:13:50.782849 master-0 kubenswrapper[29458]: I0308 22:13:50.779040 29458 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 08 22:13:50.782849 master-0 kubenswrapper[29458]: I0308 22:13:50.779045 29458 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 08 22:13:50.782849 master-0 kubenswrapper[29458]: I0308 22:13:50.779052 29458 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 08 22:13:50.782849 master-0 kubenswrapper[29458]: I0308 22:13:50.779057 29458 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 08 22:13:50.782849 master-0 kubenswrapper[29458]: I0308 22:13:50.779062 29458 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 08 22:13:50.782849 master-0 kubenswrapper[29458]: I0308 22:13:50.779086 29458 flags.go:64] FLAG: --enable-load-reader="false"
Mar 08 22:13:50.784008 master-0 kubenswrapper[29458]: I0308 22:13:50.779093 29458 flags.go:64] FLAG: --enable-server="true"
Mar 08 22:13:50.784008 master-0 kubenswrapper[29458]: I0308 22:13:50.779100 29458 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 08 22:13:50.784008 master-0 kubenswrapper[29458]: I0308 22:13:50.779107 29458 flags.go:64] FLAG: --event-burst="100"
Mar 08 22:13:50.784008 master-0 kubenswrapper[29458]: I0308 22:13:50.779113 29458 flags.go:64] FLAG: --event-qps="50"
Mar 08 22:13:50.784008 master-0 kubenswrapper[29458]: I0308 22:13:50.779117 29458 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 08 22:13:50.784008 master-0 kubenswrapper[29458]: I0308 22:13:50.779123 29458 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 08 22:13:50.784008 master-0 kubenswrapper[29458]: I0308 22:13:50.779129 29458 flags.go:64] FLAG: --eviction-hard=""
Mar 08 22:13:50.784008 master-0 kubenswrapper[29458]: I0308 22:13:50.779136 29458 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 08 22:13:50.784008 master-0 kubenswrapper[29458]: I0308 22:13:50.779141 29458 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 08 22:13:50.784008 master-0 kubenswrapper[29458]: I0308 22:13:50.779147 29458 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 08 22:13:50.784008 master-0 kubenswrapper[29458]: I0308 22:13:50.779153 29458 flags.go:64] FLAG: --eviction-soft=""
Mar 08 22:13:50.784008 master-0 kubenswrapper[29458]: I0308 22:13:50.779158 29458 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 08 22:13:50.784008 master-0 kubenswrapper[29458]: I0308 22:13:50.779163 29458 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 08 22:13:50.784008 master-0 kubenswrapper[29458]: I0308 22:13:50.779168 29458 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 08 22:13:50.784008 master-0 kubenswrapper[29458]: I0308 22:13:50.779172 29458 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 08 22:13:50.784008 master-0 kubenswrapper[29458]: I0308 22:13:50.779177 29458 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 08 22:13:50.784008 master-0 kubenswrapper[29458]: I0308 22:13:50.779183 29458 flags.go:64] FLAG: --fail-swap-on="true"
Mar 08 22:13:50.784008 master-0 kubenswrapper[29458]: I0308 22:13:50.779188 29458 flags.go:64] FLAG: --feature-gates=""
Mar 08 22:13:50.784008 master-0 kubenswrapper[29458]: I0308 22:13:50.779194 29458 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 08 22:13:50.784008 master-0 kubenswrapper[29458]: I0308 22:13:50.779198 29458 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 08 22:13:50.784008 master-0 kubenswrapper[29458]: I0308 22:13:50.779203 29458 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 08 22:13:50.784008 master-0 kubenswrapper[29458]: I0308 22:13:50.779207 29458 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 08 22:13:50.784008 master-0 kubenswrapper[29458]: I0308 22:13:50.779211 29458 flags.go:64] FLAG: --healthz-port="10248"
Mar 08 22:13:50.784008 master-0 kubenswrapper[29458]: I0308 22:13:50.779215 29458 flags.go:64] FLAG: --help="false"
Mar 08 22:13:50.784008 master-0 kubenswrapper[29458]: I0308 22:13:50.779220 29458 flags.go:64] FLAG: --hostname-override=""
Mar 08 22:13:50.784008 master-0 kubenswrapper[29458]: I0308 22:13:50.779225 29458 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 08 22:13:50.791903 master-0 kubenswrapper[29458]: I0308 22:13:50.779229 29458 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 08 22:13:50.791903 master-0 kubenswrapper[29458]: I0308 22:13:50.779234 29458 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 08 22:13:50.791903 master-0 kubenswrapper[29458]: I0308 22:13:50.779238 29458 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 08 22:13:50.791903 master-0 kubenswrapper[29458]: I0308 22:13:50.779242 29458 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 08 22:13:50.791903 master-0 kubenswrapper[29458]: I0308 22:13:50.779247 29458 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 08 22:13:50.791903 master-0 kubenswrapper[29458]: I0308 22:13:50.779251 29458 flags.go:64] FLAG: --image-service-endpoint=""
Mar 08 22:13:50.791903 master-0 kubenswrapper[29458]: I0308 22:13:50.779255 29458 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 08 22:13:50.791903 master-0 kubenswrapper[29458]: I0308 22:13:50.779260 29458 flags.go:64] FLAG: --kube-api-burst="100"
Mar 08 22:13:50.791903 master-0 kubenswrapper[29458]: I0308 22:13:50.779264 29458 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 08 22:13:50.791903 master-0 kubenswrapper[29458]: I0308 22:13:50.779268 29458 flags.go:64] FLAG: --kube-api-qps="50"
Mar 08 22:13:50.791903 master-0 kubenswrapper[29458]: I0308 22:13:50.779273 29458 flags.go:64] FLAG: --kube-reserved=""
Mar 08 22:13:50.791903 master-0 kubenswrapper[29458]: I0308 22:13:50.779277 29458 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 08 22:13:50.791903 master-0 kubenswrapper[29458]: I0308 22:13:50.779281 29458 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 08 22:13:50.791903 master-0 kubenswrapper[29458]: I0308 22:13:50.779286 29458 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 08 22:13:50.791903 master-0 kubenswrapper[29458]: I0308 22:13:50.779290 29458 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 08 22:13:50.791903 master-0 kubenswrapper[29458]: I0308 22:13:50.779294 29458 flags.go:64] FLAG: --lock-file=""
Mar 08 22:13:50.791903 master-0 kubenswrapper[29458]: I0308 22:13:50.779298 29458 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 08 22:13:50.791903 master-0 kubenswrapper[29458]: I0308 22:13:50.779303 29458 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 08 22:13:50.791903 master-0 kubenswrapper[29458]: I0308 22:13:50.779307 29458 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 08 22:13:50.791903 master-0 kubenswrapper[29458]: I0308 22:13:50.779314 29458 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 08 22:13:50.791903 master-0 kubenswrapper[29458]: I0308 22:13:50.779318 29458 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 08 22:13:50.791903 master-0 kubenswrapper[29458]: I0308 22:13:50.779322 29458 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 08 22:13:50.791903 master-0 kubenswrapper[29458]: I0308 22:13:50.779328 29458 flags.go:64] FLAG: --logging-format="text"
Mar 08 22:13:50.791903 master-0 kubenswrapper[29458]: I0308 22:13:50.779333 29458 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 08 22:13:50.791903 master-0 kubenswrapper[29458]: I0308 22:13:50.779338 29458 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 08 22:13:50.793213 master-0 kubenswrapper[29458]: I0308 22:13:50.779342 29458 flags.go:64] FLAG: --manifest-url=""
Mar 08 22:13:50.793213 master-0 kubenswrapper[29458]: I0308 22:13:50.779346 29458 flags.go:64] FLAG: --manifest-url-header=""
Mar 08 22:13:50.793213 master-0 kubenswrapper[29458]: I0308 22:13:50.779353 29458 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 08 22:13:50.793213 master-0 kubenswrapper[29458]: I0308 22:13:50.779357 29458 flags.go:64] FLAG: --max-open-files="1000000"
Mar 08 22:13:50.793213 master-0 kubenswrapper[29458]: I0308 22:13:50.779363 29458 flags.go:64] FLAG: --max-pods="110"
Mar 08 22:13:50.793213 master-0 kubenswrapper[29458]: I0308 22:13:50.779367 29458 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 08 22:13:50.793213 master-0 kubenswrapper[29458]: I0308 22:13:50.779372 29458 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 08 22:13:50.793213 master-0 kubenswrapper[29458]: I0308 22:13:50.779376 29458 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 08 22:13:50.793213 master-0 kubenswrapper[29458]: I0308 22:13:50.779381 29458 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 08 22:13:50.793213 master-0 kubenswrapper[29458]: I0308 22:13:50.779386 29458 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 08 22:13:50.793213 master-0 kubenswrapper[29458]: I0308 22:13:50.779390 29458 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 08 22:13:50.793213 master-0 kubenswrapper[29458]: I0308 22:13:50.779394 29458 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 08 22:13:50.793213 master-0 kubenswrapper[29458]: I0308 22:13:50.779404 29458 flags.go:64] FLAG: --node-status-max-images="50"
Mar 08 22:13:50.793213 master-0 kubenswrapper[29458]: I0308 22:13:50.779408 29458 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 08 22:13:50.793213 master-0 kubenswrapper[29458]: I0308 22:13:50.779415 29458 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 08 22:13:50.793213 master-0 kubenswrapper[29458]: I0308 22:13:50.779420 29458 flags.go:64] FLAG: --pod-cidr=""
Mar 08 22:13:50.793213 master-0 kubenswrapper[29458]: I0308 22:13:50.779424 29458 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3"
Mar 08 22:13:50.793213 master-0 kubenswrapper[29458]: I0308 22:13:50.779432 29458 flags.go:64] FLAG: --pod-manifest-path=""
Mar 08 22:13:50.793213 master-0 kubenswrapper[29458]: I0308 22:13:50.779437 29458 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 08 22:13:50.793213 master-0 kubenswrapper[29458]: I0308 22:13:50.779441 29458 flags.go:64] FLAG: --pods-per-core="0"
Mar 08 22:13:50.793213 master-0 kubenswrapper[29458]: I0308 22:13:50.779446 29458 flags.go:64] FLAG: --port="10250"
Mar 08 22:13:50.793213 master-0 kubenswrapper[29458]: I0308 22:13:50.779450 29458 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 08 22:13:50.793213 master-0 kubenswrapper[29458]: I0308 22:13:50.779454 29458 flags.go:64] FLAG: --provider-id=""
Mar 08 22:13:50.793213 master-0 kubenswrapper[29458]: I0308 22:13:50.779459 29458 flags.go:64] FLAG: --qos-reserved=""
Mar 08 22:13:50.794106 master-0 kubenswrapper[29458]: I0308 22:13:50.779476 29458 flags.go:64] FLAG: --read-only-port="10255"
Mar 08 22:13:50.794106 master-0 kubenswrapper[29458]: I0308 22:13:50.779482 29458 flags.go:64] FLAG: --register-node="true"
Mar 08 22:13:50.794106 master-0 kubenswrapper[29458]: I0308 22:13:50.779487 29458 flags.go:64] FLAG: --register-schedulable="true"
Mar 08 22:13:50.794106 master-0 kubenswrapper[29458]: I0308 22:13:50.779491 29458 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 08 22:13:50.794106 master-0 kubenswrapper[29458]: I0308 22:13:50.779499 29458 flags.go:64] FLAG: --registry-burst="10"
Mar 08 22:13:50.794106 master-0 kubenswrapper[29458]: I0308 22:13:50.779503 29458 flags.go:64] FLAG: --registry-qps="5"
Mar 08 22:13:50.794106 master-0 kubenswrapper[29458]: I0308 22:13:50.779511 29458 flags.go:64] FLAG: --reserved-cpus=""
Mar 08 22:13:50.794106 master-0 kubenswrapper[29458]: I0308 22:13:50.779515 29458 flags.go:64] FLAG: --reserved-memory=""
Mar 08 22:13:50.794106 master-0 kubenswrapper[29458]: I0308 22:13:50.779521 29458 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 08 22:13:50.794106 master-0 kubenswrapper[29458]: I0308 22:13:50.779525 29458 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 08 22:13:50.794106 master-0 kubenswrapper[29458]: I0308 22:13:50.779530 29458 flags.go:64] FLAG: --rotate-certificates="false"
Mar 08 22:13:50.794106 master-0 kubenswrapper[29458]: I0308 22:13:50.779534 29458 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 08 22:13:50.794106 master-0 kubenswrapper[29458]: I0308 22:13:50.779538 29458 flags.go:64] FLAG: --runonce="false"
Mar 08 22:13:50.794106 master-0 kubenswrapper[29458]: I0308 22:13:50.779543 29458 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 08 22:13:50.794106 master-0 kubenswrapper[29458]: I0308 22:13:50.779554 29458 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 08 22:13:50.794106 master-0 kubenswrapper[29458]: I0308 22:13:50.779560 29458 flags.go:64] FLAG: --seccomp-default="false"
Mar 08 22:13:50.794106 master-0 kubenswrapper[29458]: I0308 22:13:50.779566 29458 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 08 22:13:50.794106 master-0 kubenswrapper[29458]: I0308 22:13:50.779571 29458 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 08 22:13:50.794106 master-0 kubenswrapper[29458]: I0308 22:13:50.779576 29458 flags.go:64] FLAG: --storage-driver-db="cadvisor"
22:13:50.779576 29458 flags.go:64] FLAG: --storage-driver-db="cadvisor" Mar 08 22:13:50.794106 master-0 kubenswrapper[29458]: I0308 22:13:50.779582 29458 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Mar 08 22:13:50.794106 master-0 kubenswrapper[29458]: I0308 22:13:50.779587 29458 flags.go:64] FLAG: --storage-driver-password="root" Mar 08 22:13:50.794106 master-0 kubenswrapper[29458]: I0308 22:13:50.779592 29458 flags.go:64] FLAG: --storage-driver-secure="false" Mar 08 22:13:50.794106 master-0 kubenswrapper[29458]: I0308 22:13:50.779599 29458 flags.go:64] FLAG: --storage-driver-table="stats" Mar 08 22:13:50.794106 master-0 kubenswrapper[29458]: I0308 22:13:50.779603 29458 flags.go:64] FLAG: --storage-driver-user="root" Mar 08 22:13:50.794106 master-0 kubenswrapper[29458]: I0308 22:13:50.779607 29458 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Mar 08 22:13:50.795606 master-0 kubenswrapper[29458]: I0308 22:13:50.779612 29458 flags.go:64] FLAG: --sync-frequency="1m0s" Mar 08 22:13:50.795606 master-0 kubenswrapper[29458]: I0308 22:13:50.779616 29458 flags.go:64] FLAG: --system-cgroups="" Mar 08 22:13:50.795606 master-0 kubenswrapper[29458]: I0308 22:13:50.779620 29458 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Mar 08 22:13:50.795606 master-0 kubenswrapper[29458]: I0308 22:13:50.779628 29458 flags.go:64] FLAG: --system-reserved-cgroup="" Mar 08 22:13:50.795606 master-0 kubenswrapper[29458]: I0308 22:13:50.779632 29458 flags.go:64] FLAG: --tls-cert-file="" Mar 08 22:13:50.795606 master-0 kubenswrapper[29458]: I0308 22:13:50.779636 29458 flags.go:64] FLAG: --tls-cipher-suites="[]" Mar 08 22:13:50.795606 master-0 kubenswrapper[29458]: I0308 22:13:50.779642 29458 flags.go:64] FLAG: --tls-min-version="" Mar 08 22:13:50.795606 master-0 kubenswrapper[29458]: I0308 22:13:50.779646 29458 flags.go:64] FLAG: --tls-private-key-file="" Mar 08 22:13:50.795606 master-0 kubenswrapper[29458]: I0308 22:13:50.779650 29458 flags.go:64] FLAG: --topology-manager-policy="none" Mar 08 22:13:50.795606 master-0 kubenswrapper[29458]: I0308 22:13:50.779654 29458 flags.go:64] FLAG: --topology-manager-policy-options="" Mar 08 22:13:50.795606 master-0 kubenswrapper[29458]: I0308 22:13:50.779658 29458 flags.go:64] FLAG: --topology-manager-scope="container" Mar 08 22:13:50.795606 master-0 kubenswrapper[29458]: I0308 22:13:50.779662 29458 flags.go:64] FLAG: --v="2" Mar 08 22:13:50.795606 master-0 kubenswrapper[29458]: I0308 22:13:50.779669 29458 flags.go:64] FLAG: --version="false" Mar 08 22:13:50.795606 master-0 kubenswrapper[29458]: I0308 22:13:50.779676 29458 flags.go:64] FLAG: --vmodule="" Mar 08 22:13:50.795606 master-0 kubenswrapper[29458]: I0308 22:13:50.779682 29458 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Mar 08 22:13:50.795606 master-0 kubenswrapper[29458]: I0308 22:13:50.779686 29458 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Mar 08 22:13:50.795606 master-0 kubenswrapper[29458]: W0308 22:13:50.779797 29458 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 08 22:13:50.795606 master-0 kubenswrapper[29458]: W0308 22:13:50.779802 29458 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 08 22:13:50.795606 master-0 kubenswrapper[29458]: W0308 22:13:50.779806 29458 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 08 22:13:50.795606 master-0 kubenswrapper[29458]: W0308 22:13:50.779811 29458 feature_gate.go:330] unrecognized feature gate: 
IngressControllerLBSubnetsAWS Mar 08 22:13:50.795606 master-0 kubenswrapper[29458]: W0308 22:13:50.779815 29458 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 08 22:13:50.795606 master-0 kubenswrapper[29458]: W0308 22:13:50.779818 29458 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 08 22:13:50.795606 master-0 kubenswrapper[29458]: W0308 22:13:50.779822 29458 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 08 22:13:50.795606 master-0 kubenswrapper[29458]: W0308 22:13:50.779826 29458 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 08 22:13:50.796874 master-0 kubenswrapper[29458]: W0308 22:13:50.779830 29458 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 08 22:13:50.796874 master-0 kubenswrapper[29458]: W0308 22:13:50.779834 29458 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 08 22:13:50.796874 master-0 kubenswrapper[29458]: W0308 22:13:50.779837 29458 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 08 22:13:50.796874 master-0 kubenswrapper[29458]: W0308 22:13:50.779841 29458 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 08 22:13:50.796874 master-0 kubenswrapper[29458]: W0308 22:13:50.779845 29458 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 08 22:13:50.796874 master-0 kubenswrapper[29458]: W0308 22:13:50.779850 29458 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 08 22:13:50.796874 master-0 kubenswrapper[29458]: W0308 22:13:50.779867 29458 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 08 22:13:50.796874 master-0 kubenswrapper[29458]: W0308 22:13:50.779872 29458 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 08 22:13:50.796874 master-0 kubenswrapper[29458]: W0308 22:13:50.779877 29458 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 08 22:13:50.796874 master-0 kubenswrapper[29458]: W0308 22:13:50.779881 29458 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 08 22:13:50.796874 master-0 kubenswrapper[29458]: W0308 22:13:50.779885 29458 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 08 22:13:50.796874 master-0 kubenswrapper[29458]: W0308 22:13:50.779889 29458 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 08 22:13:50.796874 master-0 kubenswrapper[29458]: W0308 22:13:50.779893 29458 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 08 22:13:50.796874 master-0 kubenswrapper[29458]: W0308 22:13:50.779896 29458 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 08 22:13:50.796874 master-0 kubenswrapper[29458]: W0308 22:13:50.779901 29458 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 08 22:13:50.796874 master-0 kubenswrapper[29458]: W0308 22:13:50.779904 29458 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 08 22:13:50.796874 master-0 kubenswrapper[29458]: W0308 22:13:50.779908 29458 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 08 22:13:50.796874 master-0 kubenswrapper[29458]: W0308 22:13:50.779911 29458 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 08 22:13:50.796874 master-0 kubenswrapper[29458]: W0308 22:13:50.779915 29458 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 
08 22:13:50.796874 master-0 kubenswrapper[29458]: W0308 22:13:50.779919 29458 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 08 22:13:50.797622 master-0 kubenswrapper[29458]: W0308 22:13:50.779922 29458 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 08 22:13:50.797622 master-0 kubenswrapper[29458]: W0308 22:13:50.779928 29458 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 08 22:13:50.797622 master-0 kubenswrapper[29458]: W0308 22:13:50.779932 29458 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 08 22:13:50.797622 master-0 kubenswrapper[29458]: W0308 22:13:50.779935 29458 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 08 22:13:50.797622 master-0 kubenswrapper[29458]: W0308 22:13:50.779940 29458 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 08 22:13:50.797622 master-0 kubenswrapper[29458]: W0308 22:13:50.779945 29458 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 08 22:13:50.797622 master-0 kubenswrapper[29458]: W0308 22:13:50.779950 29458 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 08 22:13:50.797622 master-0 kubenswrapper[29458]: W0308 22:13:50.779954 29458 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 08 22:13:50.797622 master-0 kubenswrapper[29458]: W0308 22:13:50.779958 29458 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 08 22:13:50.797622 master-0 kubenswrapper[29458]: W0308 22:13:50.779962 29458 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 08 22:13:50.797622 master-0 kubenswrapper[29458]: W0308 22:13:50.779966 29458 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 08 22:13:50.797622 master-0 kubenswrapper[29458]: W0308 22:13:50.779970 29458 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 08 22:13:50.797622 master-0 kubenswrapper[29458]: W0308 22:13:50.779974 29458 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 08 22:13:50.797622 master-0 kubenswrapper[29458]: W0308 22:13:50.779978 29458 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 08 22:13:50.797622 master-0 kubenswrapper[29458]: W0308 22:13:50.779982 29458 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 08 22:13:50.797622 master-0 kubenswrapper[29458]: W0308 22:13:50.779986 29458 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 08 22:13:50.797622 master-0 kubenswrapper[29458]: W0308 22:13:50.779990 29458 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 08 22:13:50.797622 master-0 kubenswrapper[29458]: W0308 22:13:50.779996 29458 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 08 22:13:50.797622 master-0 kubenswrapper[29458]: W0308 22:13:50.780000 29458 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 08 22:13:50.797622 master-0 kubenswrapper[29458]: W0308 22:13:50.780004 29458 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 08 22:13:50.798364 master-0 kubenswrapper[29458]: W0308 22:13:50.780009 29458 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 08 22:13:50.798364 master-0 kubenswrapper[29458]: W0308 22:13:50.780013 29458 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 08 22:13:50.798364 
master-0 kubenswrapper[29458]: W0308 22:13:50.780016 29458 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 08 22:13:50.798364 master-0 kubenswrapper[29458]: W0308 22:13:50.780021 29458 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 08 22:13:50.798364 master-0 kubenswrapper[29458]: W0308 22:13:50.780025 29458 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 08 22:13:50.798364 master-0 kubenswrapper[29458]: W0308 22:13:50.780029 29458 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 08 22:13:50.798364 master-0 kubenswrapper[29458]: W0308 22:13:50.780032 29458 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 08 22:13:50.798364 master-0 kubenswrapper[29458]: W0308 22:13:50.780036 29458 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 08 22:13:50.798364 master-0 kubenswrapper[29458]: W0308 22:13:50.780040 29458 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 08 22:13:50.798364 master-0 kubenswrapper[29458]: W0308 22:13:50.780043 29458 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 08 22:13:50.798364 master-0 kubenswrapper[29458]: W0308 22:13:50.780047 29458 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 08 22:13:50.798364 master-0 kubenswrapper[29458]: W0308 22:13:50.780050 29458 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 08 22:13:50.798364 master-0 kubenswrapper[29458]: W0308 22:13:50.780054 29458 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 08 22:13:50.798364 master-0 kubenswrapper[29458]: W0308 22:13:50.780060 29458 feature_gate.go:330] unrecognized feature gate: Example Mar 08 22:13:50.798364 master-0 kubenswrapper[29458]: W0308 22:13:50.780064 29458 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 08 22:13:50.798364 master-0 kubenswrapper[29458]: W0308 22:13:50.780087 29458 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 08 22:13:50.798364 master-0 kubenswrapper[29458]: W0308 22:13:50.780095 29458 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 08 22:13:50.798364 master-0 kubenswrapper[29458]: W0308 22:13:50.780099 29458 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 08 22:13:50.798364 master-0 kubenswrapper[29458]: W0308 22:13:50.780104 29458 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 08 22:13:50.799435 master-0 kubenswrapper[29458]: W0308 22:13:50.780110 29458 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 08 22:13:50.799435 master-0 kubenswrapper[29458]: W0308 22:13:50.780114 29458 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 08 22:13:50.799435 master-0 kubenswrapper[29458]: W0308 22:13:50.780118 29458 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 08 22:13:50.799435 master-0 kubenswrapper[29458]: W0308 22:13:50.780122 29458 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 08 22:13:50.799435 master-0 kubenswrapper[29458]: W0308 22:13:50.780126 29458 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 08 22:13:50.799435 master-0 kubenswrapper[29458]: I0308 22:13:50.780138 29458 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 08 22:13:50.799435 master-0 kubenswrapper[29458]: I0308 22:13:50.796575 29458 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 08 22:13:50.799435 master-0 kubenswrapper[29458]: I0308 22:13:50.796629 29458 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 08 22:13:50.799435 master-0 kubenswrapper[29458]: W0308 22:13:50.796855 29458 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 08 22:13:50.799435 master-0 kubenswrapper[29458]: W0308 22:13:50.796870 29458 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 08 22:13:50.799435 master-0 kubenswrapper[29458]: W0308 22:13:50.796882 29458 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 08 22:13:50.799435 master-0 kubenswrapper[29458]: W0308 22:13:50.796892 29458 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 08 22:13:50.799435 master-0 kubenswrapper[29458]: W0308 22:13:50.796903 29458 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 08 22:13:50.799435 master-0 kubenswrapper[29458]: W0308 22:13:50.796913 29458 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 08 22:13:50.799435 master-0 kubenswrapper[29458]: W0308 22:13:50.796924 29458 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 08 22:13:50.800089 master-0 kubenswrapper[29458]: W0308 22:13:50.796935 29458 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 08 22:13:50.800089 master-0 kubenswrapper[29458]: W0308 22:13:50.796945 29458 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 08 22:13:50.800089 master-0 kubenswrapper[29458]: W0308 22:13:50.796954 29458 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 08 22:13:50.800089 master-0 kubenswrapper[29458]: W0308 22:13:50.796985 29458 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 08 22:13:50.800089 master-0 kubenswrapper[29458]: W0308 22:13:50.796996 29458 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 08 22:13:50.800089 master-0 kubenswrapper[29458]: W0308 22:13:50.797007 29458 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 08 22:13:50.800089 master-0 kubenswrapper[29458]: W0308 22:13:50.797018 29458 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 08 22:13:50.800089 master-0 kubenswrapper[29458]: W0308 22:13:50.797029 29458 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 08 22:13:50.800089 master-0 kubenswrapper[29458]: W0308 22:13:50.797043 29458 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 08 22:13:50.800089 master-0 kubenswrapper[29458]: W0308 22:13:50.797054 29458 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 08 22:13:50.800089 master-0 kubenswrapper[29458]: W0308 22:13:50.797063 29458 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 08 22:13:50.800089 master-0 kubenswrapper[29458]: W0308 22:13:50.797094 29458 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 08 22:13:50.800089 master-0 kubenswrapper[29458]: W0308 22:13:50.797104 29458 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 08 22:13:50.800089 master-0 kubenswrapper[29458]: W0308 22:13:50.797112 29458 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 08 22:13:50.800089 master-0 kubenswrapper[29458]: W0308 22:13:50.797129 29458 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 08 22:13:50.800089 master-0 kubenswrapper[29458]: W0308 22:13:50.797140 29458 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 08 22:13:50.800089 master-0 kubenswrapper[29458]: W0308 22:13:50.797149 29458 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 08 22:13:50.800089 master-0 kubenswrapper[29458]: W0308 22:13:50.797158 29458 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 08 22:13:50.801893 master-0 kubenswrapper[29458]: W0308 22:13:50.797167 29458 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 08 22:13:50.801893 master-0 kubenswrapper[29458]: W0308 22:13:50.797176 29458 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 08 22:13:50.801893 master-0 kubenswrapper[29458]: W0308 22:13:50.797185 29458 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 08 22:13:50.801893 master-0 kubenswrapper[29458]: W0308 22:13:50.797193 29458 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 08 22:13:50.801893 master-0 kubenswrapper[29458]: W0308 22:13:50.797202 29458 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 08 22:13:50.801893 master-0 kubenswrapper[29458]: W0308 22:13:50.797210 29458 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 08 22:13:50.801893 master-0 kubenswrapper[29458]: W0308 22:13:50.797219 29458 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 08 22:13:50.801893 master-0 kubenswrapper[29458]: W0308 22:13:50.797227 29458 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 08 22:13:50.801893 master-0 kubenswrapper[29458]: W0308 22:13:50.797240 29458 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 08 22:13:50.801893 master-0 kubenswrapper[29458]: W0308 22:13:50.797249 29458 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 08 22:13:50.801893 master-0 kubenswrapper[29458]: W0308 22:13:50.797257 29458 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 08 22:13:50.801893 master-0 kubenswrapper[29458]: W0308 22:13:50.797268 29458 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 08 22:13:50.801893 master-0 kubenswrapper[29458]: W0308 22:13:50.797289 29458 feature_gate.go:330] unrecognized feature gate: Example
Mar 08 22:13:50.801893 master-0 kubenswrapper[29458]: W0308 22:13:50.797297 29458 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 08 22:13:50.801893 master-0 kubenswrapper[29458]: W0308 22:13:50.797305 29458 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 08 22:13:50.801893 master-0 kubenswrapper[29458]: W0308 22:13:50.797313 29458 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 08 22:13:50.801893 master-0 kubenswrapper[29458]: W0308 22:13:50.797321 29458 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 08 22:13:50.801893 master-0 kubenswrapper[29458]: W0308 22:13:50.797329 29458 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 08 22:13:50.801893 master-0 kubenswrapper[29458]: W0308 22:13:50.797341 29458 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 08 22:13:50.801893 master-0 kubenswrapper[29458]: W0308 22:13:50.797351 29458 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 08 22:13:50.802906 master-0 kubenswrapper[29458]: W0308 22:13:50.797359 29458 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 08 22:13:50.802906 master-0 kubenswrapper[29458]: W0308 22:13:50.797372 29458 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 08 22:13:50.802906 master-0 kubenswrapper[29458]: W0308 22:13:50.797380 29458 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 08 22:13:50.802906 master-0 kubenswrapper[29458]: W0308 22:13:50.797389 29458 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 08 22:13:50.802906 master-0 kubenswrapper[29458]: W0308 22:13:50.797400 29458 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 08 22:13:50.802906 master-0 kubenswrapper[29458]: W0308 22:13:50.797411 29458 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 08 22:13:50.802906 master-0 kubenswrapper[29458]: W0308 22:13:50.797421 29458 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 08 22:13:50.802906 master-0 kubenswrapper[29458]: W0308 22:13:50.797429 29458 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 08 22:13:50.802906 master-0 kubenswrapper[29458]: W0308 22:13:50.797438 29458 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 08 22:13:50.802906 master-0 kubenswrapper[29458]: W0308 22:13:50.797447 29458 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 08 22:13:50.802906 master-0 kubenswrapper[29458]: W0308 22:13:50.797457 29458 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 08 22:13:50.802906 master-0 kubenswrapper[29458]: W0308 22:13:50.797466 29458 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 08 22:13:50.802906 master-0 kubenswrapper[29458]: W0308 22:13:50.797475 29458 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 08 22:13:50.802906 master-0 kubenswrapper[29458]: W0308 22:13:50.797488 29458 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 08 22:13:50.802906 master-0 kubenswrapper[29458]: W0308 22:13:50.797497 29458 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 08 22:13:50.802906 master-0 kubenswrapper[29458]: W0308 22:13:50.797505 29458 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 08 22:13:50.802906 master-0 kubenswrapper[29458]: W0308 22:13:50.797513 29458 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 08 22:13:50.802906 master-0 kubenswrapper[29458]: W0308 22:13:50.797522 29458 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 08 22:13:50.802906 master-0 kubenswrapper[29458]: W0308 22:13:50.797530 29458 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 08 22:13:50.802906 master-0 kubenswrapper[29458]: W0308 22:13:50.797538 29458 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 08 22:13:50.804104 master-0 kubenswrapper[29458]: W0308 22:13:50.797547 29458 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 08 22:13:50.804104 master-0 kubenswrapper[29458]: W0308 22:13:50.797555 29458 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 08 22:13:50.804104 master-0 kubenswrapper[29458]: W0308 22:13:50.797564 29458 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 08 22:13:50.804104 master-0 kubenswrapper[29458]: W0308 22:13:50.797572 29458 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 08 22:13:50.804104 master-0 kubenswrapper[29458]: W0308 22:13:50.797580 29458 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 08 22:13:50.804104 master-0 kubenswrapper[29458]: W0308 22:13:50.797588 29458 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 08 22:13:50.804104 master-0 kubenswrapper[29458]: W0308 22:13:50.797601 29458 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 08 22:13:50.804104 master-0 kubenswrapper[29458]: I0308 22:13:50.797630 29458 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 08 22:13:50.804104 master-0 kubenswrapper[29458]: W0308 22:13:50.798099 29458 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 08 22:13:50.804104 master-0 kubenswrapper[29458]: W0308 22:13:50.798115 29458 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 08 22:13:50.804104 master-0 kubenswrapper[29458]: W0308 22:13:50.798128 29458 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 08 22:13:50.804104 master-0 kubenswrapper[29458]: W0308 22:13:50.798139 29458 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 08 22:13:50.804104 master-0 kubenswrapper[29458]: W0308 22:13:50.798149 29458 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 08 22:13:50.804104 master-0 kubenswrapper[29458]: W0308 22:13:50.798158 29458 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 08 22:13:50.804104 master-0 kubenswrapper[29458]: W0308 22:13:50.798168 29458 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 08 22:13:50.804711 master-0 kubenswrapper[29458]: W0308 22:13:50.798177 29458 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 08 22:13:50.804711 master-0 kubenswrapper[29458]: W0308 22:13:50.798186 29458 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 08 22:13:50.804711 master-0 kubenswrapper[29458]: W0308 22:13:50.798195 29458 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 08 22:13:50.804711 master-0 kubenswrapper[29458]: W0308 22:13:50.798204 29458 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 08 22:13:50.804711 master-0 kubenswrapper[29458]: W0308 22:13:50.798213 29458 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 08 22:13:50.804711 master-0 kubenswrapper[29458]: W0308 22:13:50.798227 29458 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 08 22:13:50.804711 master-0 kubenswrapper[29458]: W0308 22:13:50.798238 29458 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 08 22:13:50.804711 master-0 kubenswrapper[29458]: W0308 22:13:50.798249 29458 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 08 22:13:50.804711 master-0 kubenswrapper[29458]: W0308 22:13:50.798258 29458 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 08 22:13:50.804711 master-0 kubenswrapper[29458]: W0308 22:13:50.798267 29458 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 08 22:13:50.804711 master-0 kubenswrapper[29458]: W0308 22:13:50.798275 29458 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 08 22:13:50.804711 master-0 kubenswrapper[29458]: W0308 22:13:50.798283 29458 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 08 22:13:50.804711 master-0 kubenswrapper[29458]: W0308 22:13:50.798291 29458 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 08 22:13:50.804711 master-0 kubenswrapper[29458]: W0308 22:13:50.798300 29458 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 08 22:13:50.804711 master-0 kubenswrapper[29458]: W0308 22:13:50.798309 29458 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 08 22:13:50.804711 master-0 kubenswrapper[29458]: W0308 22:13:50.798317 29458 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 08 22:13:50.804711 master-0 kubenswrapper[29458]: W0308 22:13:50.798325 29458 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 08 22:13:50.804711 master-0 kubenswrapper[29458]: W0308 22:13:50.798339 29458 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 08 22:13:50.804711 master-0 kubenswrapper[29458]: W0308 22:13:50.798347 29458 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 08 22:13:50.804711 master-0 kubenswrapper[29458]: W0308 22:13:50.798355 29458 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 08 22:13:50.805503 master-0 kubenswrapper[29458]: W0308 22:13:50.798366 29458 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 08 22:13:50.805503 master-0 kubenswrapper[29458]: W0308 22:13:50.798377 29458 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 08 22:13:50.805503 master-0 kubenswrapper[29458]: W0308 22:13:50.798389 29458 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 08 22:13:50.805503 master-0 kubenswrapper[29458]: W0308 22:13:50.798402 29458 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 08 22:13:50.805503 master-0 kubenswrapper[29458]: W0308 22:13:50.798412 29458 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 08 22:13:50.805503 master-0 kubenswrapper[29458]: W0308 22:13:50.798420 29458 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 08 22:13:50.805503 master-0 kubenswrapper[29458]: W0308 22:13:50.798429 29458 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 08 22:13:50.805503 master-0 kubenswrapper[29458]: W0308 22:13:50.798437 29458 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 08 22:13:50.805503 master-0 kubenswrapper[29458]: W0308 22:13:50.798445 29458 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 08 22:13:50.805503 master-0 kubenswrapper[29458]: W0308 22:13:50.798459 29458 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 08 22:13:50.805503 master-0 kubenswrapper[29458]: W0308 22:13:50.798467 29458 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 08 22:13:50.805503 master-0 kubenswrapper[29458]: W0308 22:13:50.798476 29458 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 08 22:13:50.805503 master-0 kubenswrapper[29458]: W0308 22:13:50.798485 29458 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 08 22:13:50.805503 master-0 kubenswrapper[29458]: W0308 22:13:50.798493 29458 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 08 22:13:50.805503 master-0 kubenswrapper[29458]: W0308 22:13:50.798501 29458 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 08 22:13:50.805503 master-0 kubenswrapper[29458]: W0308 22:13:50.798510 29458 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 08 22:13:50.805503 master-0 kubenswrapper[29458]: W0308 22:13:50.798519 29458 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 08 22:13:50.805503 master-0 kubenswrapper[29458]: W0308 22:13:50.798528 29458 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 08 22:13:50.805503 master-0 kubenswrapper[29458]: W0308 22:13:50.798536 29458 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 08 22:13:50.806248 master-0 kubenswrapper[29458]: W0308 22:13:50.798545 29458 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 08 22:13:50.806248 master-0 kubenswrapper[29458]: W0308 22:13:50.798555 29458 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 08 22:13:50.806248 master-0 kubenswrapper[29458]: W0308 22:13:50.798568 29458 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 08 22:13:50.806248 master-0 kubenswrapper[29458]: W0308 22:13:50.798577 29458 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 08 22:13:50.806248 master-0 kubenswrapper[29458]: W0308 22:13:50.798585 29458 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 08 22:13:50.806248 master-0 kubenswrapper[29458]: W0308 22:13:50.798593 29458 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 08 22:13:50.806248 master-0 kubenswrapper[29458]: W0308 22:13:50.798601 29458 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 08 22:13:50.806248 master-0 kubenswrapper[29458]: W0308 22:13:50.798610 29458 feature_gate.go:330] unrecognized feature gate: Example
Mar 08 22:13:50.806248 master-0 kubenswrapper[29458]: W0308 22:13:50.798618 29458 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 08 22:13:50.806248 master-0 kubenswrapper[29458]: W0308 22:13:50.798627 29458 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 08 22:13:50.806248 master-0 kubenswrapper[29458]: W0308 22:13:50.798635 29458 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 08 22:13:50.806248 master-0 kubenswrapper[29458]: W0308 22:13:50.798643 29458 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 08 22:13:50.806248 master-0 kubenswrapper[29458]: W0308 22:13:50.798652 29458 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 08 22:13:50.806248 master-0 kubenswrapper[29458]: W0308 22:13:50.798660 29458 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 08 22:13:50.806248 master-0 kubenswrapper[29458]: W0308 22:13:50.798669 29458 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 08 22:13:50.806248 master-0 kubenswrapper[29458]: W0308 22:13:50.798682 29458 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 08 22:13:50.806248 master-0 kubenswrapper[29458]: W0308 22:13:50.798691 29458 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 08 22:13:50.806248 master-0 kubenswrapper[29458]: W0308 22:13:50.798699 29458 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 08 22:13:50.806248 master-0 kubenswrapper[29458]: W0308 22:13:50.798707 29458 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 08 22:13:50.806248 master-0 kubenswrapper[29458]: W0308 22:13:50.798716 29458 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 08 22:13:50.806962 master-0 kubenswrapper[29458]: W0308 22:13:50.798724 29458 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 08 22:13:50.806962 master-0 kubenswrapper[29458]: W0308 22:13:50.798732 29458 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 08 22:13:50.806962 master-0 kubenswrapper[29458]: W0308 22:13:50.798740 29458 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 08 22:13:50.806962 master-0 kubenswrapper[29458]: W0308 22:13:50.798750 29458 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 08 22:13:50.806962 master-0 kubenswrapper[29458]: W0308 22:13:50.798758 29458 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 08 22:13:50.806962 master-0 kubenswrapper[29458]: W0308 22:13:50.798766 29458 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 08 22:13:50.806962 master-0 kubenswrapper[29458]: I0308 22:13:50.798779 29458 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 08 22:13:50.806962 master-0 kubenswrapper[29458]: I0308 22:13:50.803400 29458 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 08 22:13:50.807386 master-0 kubenswrapper[29458]: I0308 22:13:50.807335 29458 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Mar 08 22:13:50.807559 master-0 kubenswrapper[29458]: I0308 22:13:50.807519 29458 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 08 22:13:50.808009 master-0 kubenswrapper[29458]: I0308 22:13:50.807968 29458 server.go:997] "Starting client certificate rotation"
Mar 08 22:13:50.808009 master-0 kubenswrapper[29458]: I0308 22:13:50.807997 29458 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 08 22:13:50.808299 master-0 kubenswrapper[29458]: I0308 22:13:50.808175 29458 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-09 21:47:40 +0000 UTC, rotation deadline is 2026-03-09 15:05:24.367667126 +0000 UTC
Mar 08 22:13:50.808299 master-0 kubenswrapper[29458]: I0308 22:13:50.808283 29458 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 16h51m33.559387554s for next certificate rotation
Mar 08 22:13:50.809316 master-0 kubenswrapper[29458]: I0308 22:13:50.809206 29458 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 08 22:13:50.811750 master-0 kubenswrapper[29458]: I0308 22:13:50.811701 29458 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 08 22:13:50.816983 master-0 kubenswrapper[29458]: I0308 22:13:50.816352 29458 log.go:25] "Validated CRI v1 runtime API"
Mar 08 22:13:50.821389 master-0 kubenswrapper[29458]: I0308 22:13:50.821328 29458 log.go:25] "Validated CRI v1 image API"
Mar 08 22:13:50.822880 master-0 kubenswrapper[29458]: I0308 22:13:50.822448 29458 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 08 22:13:50.836397 master-0 kubenswrapper[29458]: I0308 22:13:50.836338 29458 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 f06a6435-a0b4-459f-8b49-c9a78e9e4f0c:/dev/vda3]
Mar 08 22:13:50.837815 master-0 kubenswrapper[29458]: I0308 22:13:50.836392 29458 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/03e24173b288bd97ec848e0cf7a888e3b1e752701cc2a0adfe31f0bbf45fd669/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/03e24173b288bd97ec848e0cf7a888e3b1e752701cc2a0adfe31f0bbf45fd669/userdata/shm major:0 minor:446 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0c50be0fc3f4780032df6f771d4507e5bf45df79f6025c39b105620c89303b83/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0c50be0fc3f4780032df6f771d4507e5bf45df79f6025c39b105620c89303b83/userdata/shm major:0 minor:1010 fsType:tmpfs blockSize:0}
/run/containers/storage/overlay-containers/0de0dd88c4bba9f852c91550e6622cdfe9b4a30a405c23edc2a915817b573fec/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0de0dd88c4bba9f852c91550e6622cdfe9b4a30a405c23edc2a915817b573fec/userdata/shm major:0 minor:512 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/128b0bbce1167507413481adcf0cd96d93f47d1c9ffde9e41a211956e1a927c9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/128b0bbce1167507413481adcf0cd96d93f47d1c9ffde9e41a211956e1a927c9/userdata/shm major:0 minor:1073 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1760bfc2a8a6cbf8ae227ef4de6bfa43714b1849e66a5382da34146e555ddd0f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1760bfc2a8a6cbf8ae227ef4de6bfa43714b1849e66a5382da34146e555ddd0f/userdata/shm major:0 minor:481 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1794b122d487b56235f5a9e6effbe7f1e37c18fe47d01e1c40b8a77c4e74da16/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1794b122d487b56235f5a9e6effbe7f1e37c18fe47d01e1c40b8a77c4e74da16/userdata/shm major:0 minor:407 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1d036d34fc0a96523a8a522c774101e6f8bb0dc6fc53b1cd8cbadc061d7fc1f7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1d036d34fc0a96523a8a522c774101e6f8bb0dc6fc53b1cd8cbadc061d7fc1f7/userdata/shm major:0 minor:834 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/203faae43e1f3878693db94445a48431e9bbc8eef7fc425b238b62ca3a64799d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/203faae43e1f3878693db94445a48431e9bbc8eef7fc425b238b62ca3a64799d/userdata/shm major:0 minor:130 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/270111bd9a880fa859abff7a300a5a42546d0f86314f375208a892a811a648e7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/270111bd9a880fa859abff7a300a5a42546d0f86314f375208a892a811a648e7/userdata/shm major:0 minor:44 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2e34987c76ae3161515e58a685409125bb3c2f2c0b1e13425d28a3f54cc0d97c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2e34987c76ae3161515e58a685409125bb3c2f2c0b1e13425d28a3f54cc0d97c/userdata/shm major:0 minor:947 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2f7507c2d466367da3bbc24168dc98c7fc99ef0ee4b7823db51ec09616db7efe/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2f7507c2d466367da3bbc24168dc98c7fc99ef0ee4b7823db51ec09616db7efe/userdata/shm major:0 minor:281 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3115bea19c7db25d70ce89d976323f96371d246725faa8269d586e44afe79c19/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3115bea19c7db25d70ce89d976323f96371d246725faa8269d586e44afe79c19/userdata/shm major:0 minor:893 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/318c84ebaf730c7c85b63db579f8af63f5545b50f015236d0cbd1a16b9495c4d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/318c84ebaf730c7c85b63db579f8af63f5545b50f015236d0cbd1a16b9495c4d/userdata/shm major:0 minor:97 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/362c3b514579828187f546dc53101831b423b473ee9512ccb87f2423ae6040c3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/362c3b514579828187f546dc53101831b423b473ee9512ccb87f2423ae6040c3/userdata/shm major:0 minor:258 
fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/39ad18e2cdc22131103d7ee2686ffb12580bbefadb50c1a1863e06df883204d5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/39ad18e2cdc22131103d7ee2686ffb12580bbefadb50c1a1863e06df883204d5/userdata/shm major:0 minor:249 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3ad163e6ddc790c3a3e14754fccc71ed19c06b28b075ab51e8c743f3e036d876/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3ad163e6ddc790c3a3e14754fccc71ed19c06b28b075ab51e8c743f3e036d876/userdata/shm major:0 minor:59 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3bc807693a5d4854df8f60d3cc1c2f6bf083291e98e017340995c3d3b0e2bf81/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3bc807693a5d4854df8f60d3cc1c2f6bf083291e98e017340995c3d3b0e2bf81/userdata/shm major:0 minor:334 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3df8fc2c08893e29b5ce6cbd652644e9b2d19ac599e4617011853ce2cd739da7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3df8fc2c08893e29b5ce6cbd652644e9b2d19ac599e4617011853ce2cd739da7/userdata/shm major:0 minor:114 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/409ed7dd551984c65c75de609cd08ca919d308e8d542269375ed00b6340ac461/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/409ed7dd551984c65c75de609cd08ca919d308e8d542269375ed00b6340ac461/userdata/shm major:0 minor:445 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/41f9b34125839a0766d5a064b548741e6d8afe1be3f01659bf8e4366efb2cc07/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/41f9b34125839a0766d5a064b548741e6d8afe1be3f01659bf8e4366efb2cc07/userdata/shm major:0 minor:1035 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/427fdbe110b0876dd13174b0756ac4196ec70da6181541067d85f985ac05aca4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/427fdbe110b0876dd13174b0756ac4196ec70da6181541067d85f985ac05aca4/userdata/shm major:0 minor:260 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/44048b3590f244e6e1938c80ea9293e108819fbabf668d1d67a4241c09d483ab/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/44048b3590f244e6e1938c80ea9293e108819fbabf668d1d67a4241c09d483ab/userdata/shm major:0 minor:841 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/44b935a06c24e92b8520f103f003c519a8f99b22186edfd342cc9323faa0eca5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/44b935a06c24e92b8520f103f003c519a8f99b22186edfd342cc9323faa0eca5/userdata/shm major:0 minor:265 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/44c8fec7b12dde9268d1d824a4d97116a83214d9f8983f61af194a3fa9aecae7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/44c8fec7b12dde9268d1d824a4d97116a83214d9f8983f61af194a3fa9aecae7/userdata/shm major:0 minor:756 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/46be7c8523987b3cf18afb32c173f063834fd54504cd12311bd2eab02b35bc4d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/46be7c8523987b3cf18afb32c173f063834fd54504cd12311bd2eab02b35bc4d/userdata/shm major:0 minor:874 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/49a678c1404278a258bd5f7da531aa1c8094425dc0f885e61d43b5bf65b98923/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/49a678c1404278a258bd5f7da531aa1c8094425dc0f885e61d43b5bf65b98923/userdata/shm major:0 minor:606 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/503b7b6ea77465c9cbfc84fe62fda0b7b8ad6a8d2fd54128890065de069b7f20/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/503b7b6ea77465c9cbfc84fe62fda0b7b8ad6a8d2fd54128890065de069b7f20/userdata/shm major:0 minor:245 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/53b5043fd325310586d0ad90805405242c17d1ce6d248bad4d8308d740dacd52/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/53b5043fd325310586d0ad90805405242c17d1ce6d248bad4d8308d740dacd52/userdata/shm major:0 minor:509 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/546b6a60e0c7d74e50a429925cb5072388fd5ebf8c592233957d28ac0705b80e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/546b6a60e0c7d74e50a429925cb5072388fd5ebf8c592233957d28ac0705b80e/userdata/shm major:0 minor:1003 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/556cd17b0dd9a0437b38f51d3f691ed442f4e900ac26991a4d6a0e87a7a93e20/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/556cd17b0dd9a0437b38f51d3f691ed442f4e900ac26991a4d6a0e87a7a93e20/userdata/shm major:0 minor:573 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5acb1dbbaadd24be1aa51015d4ffabe0583806b310c9bb173c49c064dc0af3d3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5acb1dbbaadd24be1aa51015d4ffabe0583806b310c9bb173c49c064dc0af3d3/userdata/shm major:0 minor:477 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5cccef451dfdb5aa39f76464f49bfb358a48c497dc23473415516a86865fc62f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5cccef451dfdb5aa39f76464f49bfb358a48c497dc23473415516a86865fc62f/userdata/shm major:0 minor:46 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5d5dc92efde818d2d1a5f4cbb624b0e37be0ed6b909a72582b68ff8f3ccab573/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5d5dc92efde818d2d1a5f4cbb624b0e37be0ed6b909a72582b68ff8f3ccab573/userdata/shm major:0 minor:671 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5e269b66a082f29b4e767340ca080090685cc35deec8a2ff5b8dffcb5ef07002/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5e269b66a082f29b4e767340ca080090685cc35deec8a2ff5b8dffcb5ef07002/userdata/shm major:0 minor:382 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5e6100d027b85834b0f36e6902f07cf9a882faac96d2f9348fa6d8cef4d4f07c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5e6100d027b85834b0f36e6902f07cf9a882faac96d2f9348fa6d8cef4d4f07c/userdata/shm major:0 minor:447 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/600acdcc91505be515a6dc9bb9d4094d13c856320148b4b0d5cc2092598749f2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/600acdcc91505be515a6dc9bb9d4094d13c856320148b4b0d5cc2092598749f2/userdata/shm major:0 minor:142 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/60db7aa4fe5c30fe7cef3df3e7aab11e1bd2cef81e0f4a40f64a350ab51de986/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/60db7aa4fe5c30fe7cef3df3e7aab11e1bd2cef81e0f4a40f64a350ab51de986/userdata/shm major:0 minor:253 
fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/65b211739156dcea6c9fedd48dbe1e6cb8361762b8f9a787cf0192fa0b5059a7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/65b211739156dcea6c9fedd48dbe1e6cb8361762b8f9a787cf0192fa0b5059a7/userdata/shm major:0 minor:459 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6798958131d9b6122a924f582d5cf236ae0ff108ba6efd07ed21d07002d8eda4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6798958131d9b6122a924f582d5cf236ae0ff108ba6efd07ed21d07002d8eda4/userdata/shm major:0 minor:268 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/67cd73a40904f0f9ea787ff881d2a840cf10744bf89845b00e5d994f7ee5b67d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/67cd73a40904f0f9ea787ff881d2a840cf10744bf89845b00e5d994f7ee5b67d/userdata/shm major:0 minor:1182 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6a34c2634ae54a66cec214aefe9bf2e49ebc56d1b92acdc88a8676a1ce3196bd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6a34c2634ae54a66cec214aefe9bf2e49ebc56d1b92acdc88a8676a1ce3196bd/userdata/shm major:0 minor:275 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6b1e7aff193baf892eff0d308b2df2d4df9815b9047c9a97600c2e10f5583a8b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6b1e7aff193baf892eff0d308b2df2d4df9815b9047c9a97600c2e10f5583a8b/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6b55e765e348290b71a16cee0db7116808a6250e19b441558bfccabf4cfbc9d8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6b55e765e348290b71a16cee0db7116808a6250e19b441558bfccabf4cfbc9d8/userdata/shm major:0 minor:570 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/75ac8242dd3ac65ec334d068ab89d656dd2f236cc11b5b2166aad268d407590d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/75ac8242dd3ac65ec334d068ab89d656dd2f236cc11b5b2166aad268d407590d/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7657ee7fb6569f1c4ef325644eaa107755f9e16754fbff803dce351304de134f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7657ee7fb6569f1c4ef325644eaa107755f9e16754fbff803dce351304de134f/userdata/shm major:0 minor:89 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7e4394146a2df2b894fc7124d9eec1bf24b8531e0bd0dd7d435898a00dec36d0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7e4394146a2df2b894fc7124d9eec1bf24b8531e0bd0dd7d435898a00dec36d0/userdata/shm major:0 minor:112 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/940096d4a40b7dc6434a7295ac74e546aac8e0fdcf673fbbc4587227bf159807/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/940096d4a40b7dc6434a7295ac74e546aac8e0fdcf673fbbc4587227bf159807/userdata/shm major:0 minor:674 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9d2b94760fb5bd6c1ac833545141ede88958ba2ac4b1af0ff830a401107ab2f9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9d2b94760fb5bd6c1ac833545141ede88958ba2ac4b1af0ff830a401107ab2f9/userdata/shm major:0 minor:511 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/9d44f96a87d3e5a63998ef47058bf56c18f9a51e485b6d530baa6ae3a9c72e79/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9d44f96a87d3e5a63998ef47058bf56c18f9a51e485b6d530baa6ae3a9c72e79/userdata/shm major:0 minor:871 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a3c825039f429bbbe3e7e27ef1491ff9c435ad7f4d68ed1d1f7b0b138f9a2544/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a3c825039f429bbbe3e7e27ef1491ff9c435ad7f4d68ed1d1f7b0b138f9a2544/userdata/shm major:0 minor:839 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a5f486dd57f083148217b384b5e4b7e4ee2cd439fe07291b198c3cd32fbe85ef/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a5f486dd57f083148217b384b5e4b7e4ee2cd439fe07291b198c3cd32fbe85ef/userdata/shm major:0 minor:726 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b3c99d21b340bbb5b5d81e3b9c44c2f6826d5e892f5141960667fbe827f38f5e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b3c99d21b340bbb5b5d81e3b9c44c2f6826d5e892f5141960667fbe827f38f5e/userdata/shm major:0 minor:676 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b606b54eb942579ee14be5af80441dce4b4a9b6234020bb3e61d0131e1fde21b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b606b54eb942579ee14be5af80441dce4b4a9b6234020bb3e61d0131e1fde21b/userdata/shm major:0 minor:240 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/be3668c70e364cf34d2eac1fb81dcded6ee681b77814828fbe8aa2c73f270566/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/be3668c70e364cf34d2eac1fb81dcded6ee681b77814828fbe8aa2c73f270566/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/be53893516c99fbabb0efb0e7767df7d102aeacc1fd8341cd8ee128754131110/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/be53893516c99fbabb0efb0e7767df7d102aeacc1fd8341cd8ee128754131110/userdata/shm major:0 minor:479 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c29732d6a1771e6db51e932e553e6dcc162c74870f5049944a74cde5e36091d0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c29732d6a1771e6db51e932e553e6dcc162c74870f5049944a74cde5e36091d0/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c3c767d6aca988650063d67045483c4316fb23551293f63bcb6227962e14fff7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c3c767d6aca988650063d67045483c4316fb23551293f63bcb6227962e14fff7/userdata/shm major:0 minor:1008 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c5f350fe49a4dbfc3234a2ef7026b555f76884632095fc5a87ca7626e176aff9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c5f350fe49a4dbfc3234a2ef7026b555f76884632095fc5a87ca7626e176aff9/userdata/shm major:0 minor:850 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c9f54e610a612acd73c7eef641d4a04d687bbce1c7479f0807ca8b7e43cd718d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c9f54e610a612acd73c7eef641d4a04d687bbce1c7479f0807ca8b7e43cd718d/userdata/shm major:0 minor:98 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d06c21917a01888be55a284a4198557df93616f6e6b788240f364df6bfb82d3a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d06c21917a01888be55a284a4198557df93616f6e6b788240f364df6bfb82d3a/userdata/shm major:0 minor:506 
fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d14eb63d678bcf527293b2268e60d6e7c54629d3617ad205aa85e0b95e38c0c8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d14eb63d678bcf527293b2268e60d6e7c54629d3617ad205aa85e0b95e38c0c8/userdata/shm major:0 minor:507 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d186c173d59660d4939673a18315486c8567701538340aa7cd6b89f06bbf1013/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d186c173d59660d4939673a18315486c8567701538340aa7cd6b89f06bbf1013/userdata/shm major:0 minor:482 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d2fca6e62ae89a98bc2678ca1c4514d3b2efd7621615252b3640dae5aca8db7e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d2fca6e62ae89a98bc2678ca1c4514d3b2efd7621615252b3640dae5aca8db7e/userdata/shm major:0 minor:478 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d3f24d18018ae4fd0cde9a9605ef8a24287eac4d74c241af3ae19429f61d0495/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d3f24d18018ae4fd0cde9a9605ef8a24287eac4d74c241af3ae19429f61d0495/userdata/shm major:0 minor:716 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/da21a3ee43c3a1cb17c48c1a6eb142ca7aa097c1d4b093b742853ab9c1146ede/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/da21a3ee43c3a1cb17c48c1a6eb142ca7aa097c1d4b093b742853ab9c1146ede/userdata/shm major:0 minor:1130 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dc168342b2accc24dd805b536a42a0f0ef9ceaae1895f17c33c4e06a0c3e9184/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dc168342b2accc24dd805b536a42a0f0ef9ceaae1895f17c33c4e06a0c3e9184/userdata/shm major:0 minor:256 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dcc02028369ad7e36bc57efbe75d5305967f85a4b9666ef43d90eeaacc2b3f3e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dcc02028369ad7e36bc57efbe75d5305967f85a4b9666ef43d90eeaacc2b3f3e/userdata/shm major:0 minor:1072 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dcce2795ffc43a6cd86e6b9ec76eb643d8b1c22dbdc50b3b5ab3767ff2108c08/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dcce2795ffc43a6cd86e6b9ec76eb643d8b1c22dbdc50b3b5ab3767ff2108c08/userdata/shm major:0 minor:1078 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/de7e09860c85ea273caa21fdbfda6d2e559117a5f7a6df3707305d264e29d687/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/de7e09860c85ea273caa21fdbfda6d2e559117a5f7a6df3707305d264e29d687/userdata/shm major:0 minor:510 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e1a74bb495c9d9aab308272824975d3fa3476be254ef7c02bd62f9151f2ab266/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e1a74bb495c9d9aab308272824975d3fa3476be254ef7c02bd62f9151f2ab266/userdata/shm major:0 minor:237 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e457f58882ed9a2cc2bdb7c9bf8dd928c9031f07753ed065fd3a502525f26699/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e457f58882ed9a2cc2bdb7c9bf8dd928c9031f07753ed065fd3a502525f26699/userdata/shm major:0 minor:1167 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/e5a5d91cfd17574435ef488a30976925f613e8868e1af9e7f86a003675b330e2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e5a5d91cfd17574435ef488a30976925f613e8868e1af9e7f86a003675b330e2/userdata/shm major:0 minor:263 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e67705a9ff72460926d3738d4c71ca542e923f9e2d5919412750e64a1d0ce8cf/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e67705a9ff72460926d3738d4c71ca542e923f9e2d5919412750e64a1d0ce8cf/userdata/shm major:0 minor:845 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ec5f0a537ae65684298a1a4ad3696c2f1fea1eefa39c8057ddfd9d3609fd93bf/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ec5f0a537ae65684298a1a4ad3696c2f1fea1eefa39c8057ddfd9d3609fd93bf/userdata/shm major:0 minor:978 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f08d60c032a49069a33366a771add75613c8b164c10de5edc94cf407f1fce2c7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f08d60c032a49069a33366a771add75613c8b164c10de5edc94cf407f1fce2c7/userdata/shm major:0 minor:868 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f656606ac6df85fac107c39c0c27a0a282ed80a965624e99277db535c27a6047/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f656606ac6df85fac107c39c0c27a0a282ed80a965624e99277db535c27a6047/userdata/shm major:0 minor:837 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f7e80d6737a7317d9e7f0a0998357862025d52425ce316b9131469a8ee87029a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f7e80d6737a7317d9e7f0a0998357862025d52425ce316b9131469a8ee87029a/userdata/shm major:0 minor:972 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f9ba7cd773b843371b8f8c24e533c22a9486952b2bc08a7f9b3ad3ee69e3c968/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f9ba7cd773b843371b8f8c24e533c22a9486952b2bc08a7f9b3ad3ee69e3c968/userdata/shm major:0 minor:325 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/00db426a-15d4-4737-a85e-b4cf6362c759/volumes/kubernetes.io~projected/kube-api-access-86mrp:{mountpoint:/var/lib/kubelet/pods/00db426a-15d4-4737-a85e-b4cf6362c759/volumes/kubernetes.io~projected/kube-api-access-86mrp major:0 minor:1181 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/00db426a-15d4-4737-a85e-b4cf6362c759/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/00db426a-15d4-4737-a85e-b4cf6362c759/volumes/kubernetes.io~secret/webhook-certs major:0 minor:1175 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0269ed52-a753-49aa-9c38-c7aee23cebbd/volumes/kubernetes.io~projected/kube-api-access-8fp4g:{mountpoint:/var/lib/kubelet/pods/0269ed52-a753-49aa-9c38-c7aee23cebbd/volumes/kubernetes.io~projected/kube-api-access-8fp4g major:0 minor:1068 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0269ed52-a753-49aa-9c38-c7aee23cebbd/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/0269ed52-a753-49aa-9c38-c7aee23cebbd/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:1064 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0269ed52-a753-49aa-9c38-c7aee23cebbd/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/0269ed52-a753-49aa-9c38-c7aee23cebbd/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:1066 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/04fb7bdb-fb5a-4187-94a3-67c8f09684ed/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/04fb7bdb-fb5a-4187-94a3-67c8f09684ed/volumes/kubernetes.io~projected/kube-api-access major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/04fb7bdb-fb5a-4187-94a3-67c8f09684ed/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/04fb7bdb-fb5a-4187-94a3-67c8f09684ed/volumes/kubernetes.io~secret/serving-cert major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/077643a2-ab2d-4f12-9abf-42a1c56d7aff/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/077643a2-ab2d-4f12-9abf-42a1c56d7aff/volumes/kubernetes.io~projected/ca-certs major:0 minor:693 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/077643a2-ab2d-4f12-9abf-42a1c56d7aff/volumes/kubernetes.io~projected/kube-api-access-mp26r:{mountpoint:/var/lib/kubelet/pods/077643a2-ab2d-4f12-9abf-42a1c56d7aff/volumes/kubernetes.io~projected/kube-api-access-mp26r major:0 minor:692 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/081acedd-4c88-461f-80f3-e80fdbadb725/volumes/kubernetes.io~projected/kube-api-access-cpxls:{mountpoint:/var/lib/kubelet/pods/081acedd-4c88-461f-80f3-e80fdbadb725/volumes/kubernetes.io~projected/kube-api-access-cpxls major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/081acedd-4c88-461f-80f3-e80fdbadb725/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/081acedd-4c88-461f-80f3-e80fdbadb725/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/088eecd9-a153-4fe0-af5a-78f5bdc0eb6b/volumes/kubernetes.io~projected/kube-api-access-w5t9m:{mountpoint:/var/lib/kubelet/pods/088eecd9-a153-4fe0-af5a-78f5bdc0eb6b/volumes/kubernetes.io~projected/kube-api-access-w5t9m major:0 minor:867 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0cb21214-292a-48ee-85e2-6b1e62f40cb4/volumes/kubernetes.io~projected/kube-api-access-sg2dp:{mountpoint:/var/lib/kubelet/pods/0cb21214-292a-48ee-85e2-6b1e62f40cb4/volumes/kubernetes.io~projected/kube-api-access-sg2dp major:0 minor:658 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0cb21214-292a-48ee-85e2-6b1e62f40cb4/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/0cb21214-292a-48ee-85e2-6b1e62f40cb4/volumes/kubernetes.io~secret/metrics-tls major:0 minor:670 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0d0feb73-2ef6-4083-81ce-82a1394ce9c4/volumes/kubernetes.io~projected/kube-api-access-jfpt7:{mountpoint:/var/lib/kubelet/pods/0d0feb73-2ef6-4083-81ce-82a1394ce9c4/volumes/kubernetes.io~projected/kube-api-access-jfpt7 major:0 minor:437 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0d851f97-b21e-432e-a4c3-dc0a8ff00e84/volumes/kubernetes.io~projected/kube-api-access-7tlmx:{mountpoint:/var/lib/kubelet/pods/0d851f97-b21e-432e-a4c3-dc0a8ff00e84/volumes/kubernetes.io~projected/kube-api-access-7tlmx major:0 minor:230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0d851f97-b21e-432e-a4c3-dc0a8ff00e84/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0d851f97-b21e-432e-a4c3-dc0a8ff00e84/volumes/kubernetes.io~secret/serving-cert major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/10e2e81b-cd18-4e30-b8ad-4cf105daea4a/volumes/kubernetes.io~projected/kube-api-access-sjndf:{mountpoint:/var/lib/kubelet/pods/10e2e81b-cd18-4e30-b8ad-4cf105daea4a/volumes/kubernetes.io~projected/kube-api-access-sjndf major:0 minor:1004 fsType:tmpfs 
blockSize:0} /var/lib/kubelet/pods/1232f59f-4e6a-46ef-8bec-1bd4e04956ef/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/1232f59f-4e6a-46ef-8bec-1bd4e04956ef/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1232f59f-4e6a-46ef-8bec-1bd4e04956ef/volumes/kubernetes.io~projected/kube-api-access-pcqnj:{mountpoint:/var/lib/kubelet/pods/1232f59f-4e6a-46ef-8bec-1bd4e04956ef/volumes/kubernetes.io~projected/kube-api-access-pcqnj major:0 minor:127 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1232f59f-4e6a-46ef-8bec-1bd4e04956ef/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/1232f59f-4e6a-46ef-8bec-1bd4e04956ef/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:126 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1ef14467-bb62-462d-9dec-dee43e4cc9bd/volumes/kubernetes.io~projected/kube-api-access-6tfdv:{mountpoint:/var/lib/kubelet/pods/1ef14467-bb62-462d-9dec-dee43e4cc9bd/volumes/kubernetes.io~projected/kube-api-access-6tfdv major:0 minor:648 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1ef14467-bb62-462d-9dec-dee43e4cc9bd/volumes/kubernetes.io~secret/machine-api-operator-tls:{mountpoint:/var/lib/kubelet/pods/1ef14467-bb62-462d-9dec-dee43e4cc9bd/volumes/kubernetes.io~secret/machine-api-operator-tls major:0 minor:623 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2395900a-ff6b-46ff-92c6-a8a1b5675b67/volumes/kubernetes.io~projected/kube-api-access-7v6dc:{mountpoint:/var/lib/kubelet/pods/2395900a-ff6b-46ff-92c6-a8a1b5675b67/volumes/kubernetes.io~projected/kube-api-access-7v6dc major:0 minor:563 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2395900a-ff6b-46ff-92c6-a8a1b5675b67/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/2395900a-ff6b-46ff-92c6-a8a1b5675b67/volumes/kubernetes.io~secret/serving-cert major:0 minor:562 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2851c096-f5cb-4a46-a5a0-ac0b1341033b/volumes/kubernetes.io~projected/kube-api-access-2l47w:{mountpoint:/var/lib/kubelet/pods/2851c096-f5cb-4a46-a5a0-ac0b1341033b/volumes/kubernetes.io~projected/kube-api-access-2l47w major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2851c096-f5cb-4a46-a5a0-ac0b1341033b/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/2851c096-f5cb-4a46-a5a0-ac0b1341033b/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:470 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2851c096-f5cb-4a46-a5a0-ac0b1341033b/volumes/kubernetes.io~secret/node-tuning-operator-tls:{mountpoint:/var/lib/kubelet/pods/2851c096-f5cb-4a46-a5a0-ac0b1341033b/volumes/kubernetes.io~secret/node-tuning-operator-tls major:0 minor:444 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2a91f36f-900e-4b99-9be1-dfc61d8e31d9/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/2a91f36f-900e-4b99-9be1-dfc61d8e31d9/volumes/kubernetes.io~projected/ca-certs major:0 minor:689 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2a91f36f-900e-4b99-9be1-dfc61d8e31d9/volumes/kubernetes.io~projected/kube-api-access-ftn6p:{mountpoint:/var/lib/kubelet/pods/2a91f36f-900e-4b99-9be1-dfc61d8e31d9/volumes/kubernetes.io~projected/kube-api-access-ftn6p major:0 minor:691 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2a91f36f-900e-4b99-9be1-dfc61d8e31d9/volumes/kubernetes.io~secret/catalogserver-certs:{mountpoint:/var/lib/kubelet/pods/2a91f36f-900e-4b99-9be1-dfc61d8e31d9/volumes/kubernetes.io~secret/catalogserver-certs major:0 
minor:690 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8/volumes/kubernetes.io~projected/kube-api-access-dqkp4:{mountpoint:/var/lib/kubelet/pods/2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8/volumes/kubernetes.io~projected/kube-api-access-dqkp4 major:0 minor:864 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert major:0 minor:863 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/345ca27a-f572-4efa-b0ce-dfa8243becd6/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/345ca27a-f572-4efa-b0ce-dfa8243becd6/volumes/kubernetes.io~projected/kube-api-access major:0 minor:379 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/37bf82cb-adea-46d3-a899-136eb1d1f292/volumes/kubernetes.io~projected/kube-api-access-v6ht7:{mountpoint:/var/lib/kubelet/pods/37bf82cb-adea-46d3-a899-136eb1d1f292/volumes/kubernetes.io~projected/kube-api-access-v6ht7 major:0 minor:228 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/385e69e4-d443-44bb-8ee4-578a1c902c62/volumes/kubernetes.io~projected/kube-api-access-vxg7t:{mountpoint:/var/lib/kubelet/pods/385e69e4-d443-44bb-8ee4-578a1c902c62/volumes/kubernetes.io~projected/kube-api-access-vxg7t major:0 minor:105 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0/volumes/kubernetes.io~projected/kube-api-access-ff6pm:{mountpoint:/var/lib/kubelet/pods/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0/volumes/kubernetes.io~projected/kube-api-access-ff6pm major:0 minor:232 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0/volumes/kubernetes.io~secret/srv-cert major:0 minor:544 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3e38e989-41b8-4c80-99fb-8d414dda5da1/volumes/kubernetes.io~projected/kube-api-access-jp86m:{mountpoint:/var/lib/kubelet/pods/3e38e989-41b8-4c80-99fb-8d414dda5da1/volumes/kubernetes.io~projected/kube-api-access-jp86m major:0 minor:802 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3e38e989-41b8-4c80-99fb-8d414dda5da1/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/3e38e989-41b8-4c80-99fb-8d414dda5da1/volumes/kubernetes.io~secret/proxy-tls major:0 minor:495 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657/volumes/kubernetes.io~projected/kube-api-access-96gl4:{mountpoint:/var/lib/kubelet/pods/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657/volumes/kubernetes.io~projected/kube-api-access-96gl4 major:0 minor:224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657/volumes/kubernetes.io~secret/serving-cert major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4382d186-34e4-40af-9b92-bb17ddcaa23f/volumes/kubernetes.io~projected/kube-api-access-2hstt:{mountpoint:/var/lib/kubelet/pods/4382d186-34e4-40af-9b92-bb17ddcaa23f/volumes/kubernetes.io~projected/kube-api-access-2hstt major:0 minor:233 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/4382d186-34e4-40af-9b92-bb17ddcaa23f/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/4382d186-34e4-40af-9b92-bb17ddcaa23f/volumes/kubernetes.io~secret/etcd-client major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4382d186-34e4-40af-9b92-bb17ddcaa23f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/4382d186-34e4-40af-9b92-bb17ddcaa23f/volumes/kubernetes.io~secret/serving-cert major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/44e67e41-045e-42ef-8f60-6ef15606d6a2/volumes/kubernetes.io~projected/kube-api-access-zl4xt:{mountpoint:/var/lib/kubelet/pods/44e67e41-045e-42ef-8f60-6ef15606d6a2/volumes/kubernetes.io~projected/kube-api-access-zl4xt major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/44e67e41-045e-42ef-8f60-6ef15606d6a2/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/44e67e41-045e-42ef-8f60-6ef15606d6a2/volumes/kubernetes.io~secret/metrics-certs major:0 minor:465 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4b5246dc-b715-4678-a3a9-878df57dd236/volumes/kubernetes.io~projected/kube-api-access-hq7xb:{mountpoint:/var/lib/kubelet/pods/4b5246dc-b715-4678-a3a9-878df57dd236/volumes/kubernetes.io~projected/kube-api-access-hq7xb major:0 minor:962 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4b5246dc-b715-4678-a3a9-878df57dd236/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/4b5246dc-b715-4678-a3a9-878df57dd236/volumes/kubernetes.io~secret/certs major:0 minor:960 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4b5246dc-b715-4678-a3a9-878df57dd236/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/4b5246dc-b715-4678-a3a9-878df57dd236/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:961 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f/volumes/kubernetes.io~projected/kube-api-access-gxxvr:{mountpoint:/var/lib/kubelet/pods/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f/volumes/kubernetes.io~projected/kube-api-access-gxxvr major:0 minor:849 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:848 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e/volumes/kubernetes.io~projected/kube-api-access-lhp8w:{mountpoint:/var/lib/kubelet/pods/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e/volumes/kubernetes.io~projected/kube-api-access-lhp8w major:0 minor:833 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:803 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e/volumes/kubernetes.io~secret/webhook-cert major:0 minor:832 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4eec590b-c536-4b16-a664-81bc3c74eef5/volumes/kubernetes.io~projected/kube-api-access-k67bc:{mountpoint:/var/lib/kubelet/pods/4eec590b-c536-4b16-a664-81bc3c74eef5/volumes/kubernetes.io~projected/kube-api-access-k67bc major:0 minor:308 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/4ef806a4-5486-43a9-8bfa-b1670c888dc1/volumes/kubernetes.io~projected/kube-api-access-qzlpq:{mountpoint:/var/lib/kubelet/pods/4ef806a4-5486-43a9-8bfa-b1670c888dc1/volumes/kubernetes.io~projected/kube-api-access-qzlpq major:0 minor:231 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4ef806a4-5486-43a9-8bfa-b1670c888dc1/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls:{mountpoint:/var/lib/kubelet/pods/4ef806a4-5486-43a9-8bfa-b1670c888dc1/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls major:0 minor:466 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/669ef8c8-8a32-4ebd-acc4-e8b2b45286a0/volumes/kubernetes.io~projected/kube-api-access-jb2lv:{mountpoint:/var/lib/kubelet/pods/669ef8c8-8a32-4ebd-acc4-e8b2b45286a0/volumes/kubernetes.io~projected/kube-api-access-jb2lv major:0 minor:673 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/66e50eed-e3ac-431f-931b-7c4c848c491b/volumes/kubernetes.io~projected/kube-api-access-bjrqj:{mountpoint:/var/lib/kubelet/pods/66e50eed-e3ac-431f-931b-7c4c848c491b/volumes/kubernetes.io~projected/kube-api-access-bjrqj major:0 minor:611 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/66e50eed-e3ac-431f-931b-7c4c848c491b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/66e50eed-e3ac-431f-931b-7c4c848c491b/volumes/kubernetes.io~secret/serving-cert major:0 minor:580 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6eb502a1-db10-46ba-b698-461919464fb9/volumes/kubernetes.io~projected/kube-api-access-sjlqz:{mountpoint:/var/lib/kubelet/pods/6eb502a1-db10-46ba-b698-461919464fb9/volumes/kubernetes.io~projected/kube-api-access-sjlqz major:0 minor:808 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6eb502a1-db10-46ba-b698-461919464fb9/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls:{mountpoint:/var/lib/kubelet/pods/6eb502a1-db10-46ba-b698-461919464fb9/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls major:0 minor:822 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3/volumes/kubernetes.io~projected/kube-api-access-shdtk:{mountpoint:/var/lib/kubelet/pods/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3/volumes/kubernetes.io~projected/kube-api-access-shdtk major:0 minor:946 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3/volumes/kubernetes.io~secret/proxy-tls major:0 minor:881 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7e0267ba-5dd7-4e81-885f-95b27a7b42ea/volumes/kubernetes.io~projected/kube-api-access-jjt52:{mountpoint:/var/lib/kubelet/pods/7e0267ba-5dd7-4e81-885f-95b27a7b42ea/volumes/kubernetes.io~projected/kube-api-access-jjt52 major:0 minor:267 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7e0267ba-5dd7-4e81-885f-95b27a7b42ea/volumes/kubernetes.io~secret/marketplace-operator-metrics:{mountpoint:/var/lib/kubelet/pods/7e0267ba-5dd7-4e81-885f-95b27a7b42ea/volumes/kubernetes.io~secret/marketplace-operator-metrics major:0 minor:464 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/81f5ed55-225c-41e2-bc9d-b41063a604c9/volumes/kubernetes.io~projected/kube-api-access-7kz92:{mountpoint:/var/lib/kubelet/pods/81f5ed55-225c-41e2-bc9d-b41063a604c9/volumes/kubernetes.io~projected/kube-api-access-7kz92 major:0 minor:1001 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/81f5ed55-225c-41e2-bc9d-b41063a604c9/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/81f5ed55-225c-41e2-bc9d-b41063a604c9/volumes/kubernetes.io~secret/default-certificate major:0 minor:993 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/81f5ed55-225c-41e2-bc9d-b41063a604c9/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/81f5ed55-225c-41e2-bc9d-b41063a604c9/volumes/kubernetes.io~secret/metrics-certs major:0 minor:1000 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/81f5ed55-225c-41e2-bc9d-b41063a604c9/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/81f5ed55-225c-41e2-bc9d-b41063a604c9/volumes/kubernetes.io~secret/stats-auth major:0 minor:992 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/83b5f0b6-adee-4820-8212-b4d182b178d2/volumes/kubernetes.io~projected/kube-api-access-5pwq4:{mountpoint:/var/lib/kubelet/pods/83b5f0b6-adee-4820-8212-b4d182b178d2/volumes/kubernetes.io~projected/kube-api-access-5pwq4 major:0 minor:236 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/83b5f0b6-adee-4820-8212-b4d182b178d2/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/83b5f0b6-adee-4820-8212-b4d182b178d2/volumes/kubernetes.io~secret/srv-cert major:0 minor:545 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/volumes/kubernetes.io~projected/kube-api-access-vwdhp:{mountpoint:/var/lib/kubelet/pods/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/volumes/kubernetes.io~projected/kube-api-access-vwdhp major:0 minor:255 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/volumes/kubernetes.io~secret/metrics-tls major:0 minor:505 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/89619d97-2c16-4e76-ba80-8b519f6a9366/volumes/kubernetes.io~projected/kube-api-access-zj5rx:{mountpoint:/var/lib/kubelet/pods/89619d97-2c16-4e76-ba80-8b519f6a9366/volumes/kubernetes.io~projected/kube-api-access-zj5rx major:0 minor:653 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8a7e92d4-b7ed-408b-b7cf-00209a627bea/volumes/kubernetes.io~projected/kube-api-access-qdz7m:{mountpoint:/var/lib/kubelet/pods/8a7e92d4-b7ed-408b-b7cf-00209a627bea/volumes/kubernetes.io~projected/kube-api-access-qdz7m major:0 minor:1032 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8a7e92d4-b7ed-408b-b7cf-00209a627bea/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/8a7e92d4-b7ed-408b-b7cf-00209a627bea/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config major:0 minor:1030 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8a7e92d4-b7ed-408b-b7cf-00209a627bea/volumes/kubernetes.io~secret/prometheus-operator-tls:{mountpoint:/var/lib/kubelet/pods/8a7e92d4-b7ed-408b-b7cf-00209a627bea/volumes/kubernetes.io~secret/prometheus-operator-tls major:0 minor:1031 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/96a67acb-9cc6-4793-b99a-01479b239d76/volumes/kubernetes.io~projected/kube-api-access-d9xj9:{mountpoint:/var/lib/kubelet/pods/96a67acb-9cc6-4793-b99a-01479b239d76/volumes/kubernetes.io~projected/kube-api-access-d9xj9 major:0 minor:118 
fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/971ffa86-4d52-4dc3-ba28-03d116ec3494/volumes/kubernetes.io~projected/kube-api-access-7z7fx:{mountpoint:/var/lib/kubelet/pods/971ffa86-4d52-4dc3-ba28-03d116ec3494/volumes/kubernetes.io~projected/kube-api-access-7z7fx major:0 minor:226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/971ffa86-4d52-4dc3-ba28-03d116ec3494/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/971ffa86-4d52-4dc3-ba28-03d116ec3494/volumes/kubernetes.io~secret/serving-cert major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a21e2296-10cb-4c70-ac3e-2173d35faac4/volumes/kubernetes.io~projected/kube-api-access-7xcbb:{mountpoint:/var/lib/kubelet/pods/a21e2296-10cb-4c70-ac3e-2173d35faac4/volumes/kubernetes.io~projected/kube-api-access-7xcbb major:0 minor:95 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a21e2296-10cb-4c70-ac3e-2173d35faac4/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/a21e2296-10cb-4c70-ac3e-2173d35faac4/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a5afb146-31d7-4da9-8738-b6c15528233a/volumes/kubernetes.io~projected/kube-api-access-mvp5b:{mountpoint:/var/lib/kubelet/pods/a5afb146-31d7-4da9-8738-b6c15528233a/volumes/kubernetes.io~projected/kube-api-access-mvp5b major:0 minor:669 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a5afb146-31d7-4da9-8738-b6c15528233a/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/a5afb146-31d7-4da9-8738-b6c15528233a/volumes/kubernetes.io~secret/encryption-config major:0 minor:660 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a5afb146-31d7-4da9-8738-b6c15528233a/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/a5afb146-31d7-4da9-8738-b6c15528233a/volumes/kubernetes.io~secret/etcd-client major:0 minor:668 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a5afb146-31d7-4da9-8738-b6c15528233a/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a5afb146-31d7-4da9-8738-b6c15528233a/volumes/kubernetes.io~secret/serving-cert major:0 minor:667 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a8e00c74-fb72-4e3d-a22c-c38a4772a813/volumes/kubernetes.io~projected/kube-api-access-gwqqw:{mountpoint:/var/lib/kubelet/pods/a8e00c74-fb72-4e3d-a22c-c38a4772a813/volumes/kubernetes.io~projected/kube-api-access-gwqqw major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a8e00c74-fb72-4e3d-a22c-c38a4772a813/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a8e00c74-fb72-4e3d-a22c-c38a4772a813/volumes/kubernetes.io~secret/serving-cert major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a913c639-ebfc-42a3-85cd-8a460027d3ec/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/a913c639-ebfc-42a3-85cd-8a460027d3ec/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a913c639-ebfc-42a3-85cd-8a460027d3ec/volumes/kubernetes.io~projected/kube-api-access-drcp8:{mountpoint:/var/lib/kubelet/pods/a913c639-ebfc-42a3-85cd-8a460027d3ec/volumes/kubernetes.io~projected/kube-api-access-drcp8 major:0 minor:252 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a913c639-ebfc-42a3-85cd-8a460027d3ec/volumes/kubernetes.io~secret/image-registry-operator-tls:{mountpoint:/var/lib/kubelet/pods/a913c639-ebfc-42a3-85cd-8a460027d3ec/volumes/kubernetes.io~secret/image-registry-operator-tls
major:0 minor:503 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a/volumes/kubernetes.io~projected/kube-api-access-lpb8q:{mountpoint:/var/lib/kubelet/pods/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a/volumes/kubernetes.io~projected/kube-api-access-lpb8q major:0 minor:490 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a/volumes/kubernetes.io~secret/encryption-config major:0 minor:442 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a/volumes/kubernetes.io~secret/etcd-client major:0 minor:461 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a/volumes/kubernetes.io~secret/serving-cert major:0 minor:462 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b1207b6b-0517-46eb-9953-737f2bf1040d/volumes/kubernetes.io~projected/kube-api-access-d2lsl:{mountpoint:/var/lib/kubelet/pods/b1207b6b-0517-46eb-9953-737f2bf1040d/volumes/kubernetes.io~projected/kube-api-access-d2lsl major:0 minor:326 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b358dcb7-d01f-4206-b636-b55a599a73bd/volumes/kubernetes.io~projected/kube-api-access-bmdmr:{mountpoint:/var/lib/kubelet/pods/b358dcb7-d01f-4206-b636-b55a599a73bd/volumes/kubernetes.io~projected/kube-api-access-bmdmr major:0 minor:270 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b6bc6f78-2c5c-4add-925f-f6568a49c2cc/volumes/kubernetes.io~projected/kube-api-access-c52wj:{mountpoint:/var/lib/kubelet/pods/b6bc6f78-2c5c-4add-925f-f6568a49c2cc/volumes/kubernetes.io~projected/kube-api-access-c52wj major:0 minor:977 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b6bc6f78-2c5c-4add-925f-f6568a49c2cc/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/b6bc6f78-2c5c-4add-925f-f6568a49c2cc/volumes/kubernetes.io~secret/proxy-tls major:0 minor:973 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b849f992-1020-4633-98be-75705b962fa9/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/b849f992-1020-4633-98be-75705b962fa9/volumes/kubernetes.io~projected/kube-api-access major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b849f992-1020-4633-98be-75705b962fa9/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b849f992-1020-4633-98be-75705b962fa9/volumes/kubernetes.io~secret/serving-cert major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/be431b74-1116-4b0f-8b25-bbb0408411b0/volumes/kubernetes.io~projected/kube-api-access-tv57k:{mountpoint:/var/lib/kubelet/pods/be431b74-1116-4b0f-8b25-bbb0408411b0/volumes/kubernetes.io~projected/kube-api-access-tv57k major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/be431b74-1116-4b0f-8b25-bbb0408411b0/volumes/kubernetes.io~secret/package-server-manager-serving-cert:{mountpoint:/var/lib/kubelet/pods/be431b74-1116-4b0f-8b25-bbb0408411b0/volumes/kubernetes.io~secret/package-server-manager-serving-cert major:0 minor:543 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad/volumes/kubernetes.io~projected/kube-api-access-sdfls:{mountpoint:/var/lib/kubelet/pods/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad/volumes/kubernetes.io~projected/kube-api-access-sdfls major:0 minor:655 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:654 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c377685c-2024-4ef7-932d-5858eeb0d9bd/volumes/kubernetes.io~projected/kube-api-access-4z4s4:{mountpoint:/var/lib/kubelet/pods/c377685c-2024-4ef7-932d-5858eeb0d9bd/volumes/kubernetes.io~projected/kube-api-access-4z4s4 major:0 minor:1067 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c377685c-2024-4ef7-932d-5858eeb0d9bd/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/c377685c-2024-4ef7-932d-5858eeb0d9bd/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config major:0 minor:1065 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c377685c-2024-4ef7-932d-5858eeb0d9bd/volumes/kubernetes.io~secret/openshift-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/c377685c-2024-4ef7-932d-5858eeb0d9bd/volumes/kubernetes.io~secret/openshift-state-metrics-tls major:0 minor:1060 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c3af41e9-c604-48da-bec5-df81c2ef3374/volumes/kubernetes.io~projected/kube-api-access-z2nfk:{mountpoint:/var/lib/kubelet/pods/c3af41e9-c604-48da-bec5-df81c2ef3374/volumes/kubernetes.io~projected/kube-api-access-z2nfk major:0 minor:1071 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c3af41e9-c604-48da-bec5-df81c2ef3374/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/c3af41e9-c604-48da-bec5-df81c2ef3374/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config major:0 minor:1070 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c3af41e9-c604-48da-bec5-df81c2ef3374/volumes/kubernetes.io~secret/kube-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/c3af41e9-c604-48da-bec5-df81c2ef3374/volumes/kubernetes.io~secret/kube-state-metrics-tls major:0 minor:1069 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c901b468-b8e9-48f8-8050-0d54e24e2adb/volumes/kubernetes.io~projected/kube-api-access-hmfqq:{mountpoint:/var/lib/kubelet/pods/c901b468-b8e9-48f8-8050-0d54e24e2adb/volumes/kubernetes.io~projected/kube-api-access-hmfqq major:0 minor:443 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d063b330-4180-43de-a248-c573183d96f1/volumes/kubernetes.io~projected/kube-api-access-8v2k8:{mountpoint:/var/lib/kubelet/pods/d063b330-4180-43de-a248-c573183d96f1/volumes/kubernetes.io~projected/kube-api-access-8v2k8 major:0 minor:971 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d063b330-4180-43de-a248-c573183d96f1/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/d063b330-4180-43de-a248-c573183d96f1/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:970 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d0641333-feda-44c5-baf5-ceee4ce3fd8f/volumes/kubernetes.io~projected/kube-api-access-784c7:{mountpoint:/var/lib/kubelet/pods/d0641333-feda-44c5-baf5-ceee4ce3fd8f/volumes/kubernetes.io~projected/kube-api-access-784c7 major:0 minor:243 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d0641333-feda-44c5-baf5-ceee4ce3fd8f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d0641333-feda-44c5-baf5-ceee4ce3fd8f/volumes/kubernetes.io~secret/serving-cert major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d4d01185-e485-4697-92c2-31a044f25d82/volumes/kubernetes.io~projected/kube-api-access-ngf2z:{mountpoint:/var/lib/kubelet/pods/d4d01185-e485-4697-92c2-31a044f25d82/volumes/kubernetes.io~projected/kube-api-access-ngf2z major:0 minor:223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d4d01185-e485-4697-92c2-31a044f25d82/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d4d01185-e485-4697-92c2-31a044f25d82/volumes/kubernetes.io~secret/serving-cert major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d589bfbb-3a7d-4617-9770-5c9ef737cb4a/volumes/kubernetes.io~projected/kube-api-access-9l82d:{mountpoint:/var/lib/kubelet/pods/d589bfbb-3a7d-4617-9770-5c9ef737cb4a/volumes/kubernetes.io~projected/kube-api-access-9l82d major:0 minor:1129 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d589bfbb-3a7d-4617-9770-5c9ef737cb4a/volumes/kubernetes.io~secret/client-ca-bundle:{mountpoint:/var/lib/kubelet/pods/d589bfbb-3a7d-4617-9770-5c9ef737cb4a/volumes/kubernetes.io~secret/client-ca-bundle major:0 minor:1128 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d589bfbb-3a7d-4617-9770-5c9ef737cb4a/volumes/kubernetes.io~secret/secret-metrics-client-certs:{mountpoint:/var/lib/kubelet/pods/d589bfbb-3a7d-4617-9770-5c9ef737cb4a/volumes/kubernetes.io~secret/secret-metrics-client-certs major:0 minor:1123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d589bfbb-3a7d-4617-9770-5c9ef737cb4a/volumes/kubernetes.io~secret/secret-metrics-server-tls:{mountpoint:/var/lib/kubelet/pods/d589bfbb-3a7d-4617-9770-5c9ef737cb4a/volumes/kubernetes.io~secret/secret-metrics-server-tls major:0 minor:1127 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9e9c931-9595-42f1-bbc2-c412286f6cd1/volumes/kubernetes.io~projected/kube-api-access-znqrj:{mountpoint:/var/lib/kubelet/pods/d9e9c931-9595-42f1-bbc2-c412286f6cd1/volumes/kubernetes.io~projected/kube-api-access-znqrj major:0 minor:887 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9e9c931-9595-42f1-bbc2-c412286f6cd1/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/d9e9c931-9595-42f1-bbc2-c412286f6cd1/volumes/kubernetes.io~secret/cert major:0 minor:880 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9e9c931-9595-42f1-bbc2-c412286f6cd1/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls:{mountpoint:/var/lib/kubelet/pods/d9e9c931-9595-42f1-bbc2-c412286f6cd1/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls major:0 minor:885 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9fe466f-5a23-4f69-8a96-44bd5d6194f5/volumes/kubernetes.io~projected/kube-api-access-nvmk7:{mountpoint:/var/lib/kubelet/pods/d9fe466f-5a23-4f69-8a96-44bd5d6194f5/volumes/kubernetes.io~projected/kube-api-access-nvmk7 major:0 minor:907 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9fe466f-5a23-4f69-8a96-44bd5d6194f5/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/d9fe466f-5a23-4f69-8a96-44bd5d6194f5/volumes/kubernetes.io~secret/cert major:0 minor:904 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/da51940a-a38f-4baf-9c14-b2f1f46b5aed/volumes/kubernetes.io~projected/kube-api-access-clxsk:{mountpoint:/var/lib/kubelet/pods/da51940a-a38f-4baf-9c14-b2f1f46b5aed/volumes/kubernetes.io~projected/kube-api-access-clxsk major:0 minor:564 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/da51940a-a38f-4baf-9c14-b2f1f46b5aed/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/da51940a-a38f-4baf-9c14-b2f1f46b5aed/volumes/kubernetes.io~secret/serving-cert major:0 minor:524 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/de89c423-0f2a-440f-9fa9-92fefea84b09/volumes/kubernetes.io~projected/kube-api-access-7h4vv:{mountpoint:/var/lib/kubelet/pods/de89c423-0f2a-440f-9fa9-92fefea84b09/volumes/kubernetes.io~projected/kube-api-access-7h4vv major:0 minor:259 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/de89c423-0f2a-440f-9fa9-92fefea84b09/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/de89c423-0f2a-440f-9fa9-92fefea84b09/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/df48e7e0-0659-48e2-9b6a-32c964ff47b2/volumes/kubernetes.io~projected/kube-api-access-4dr4p:{mountpoint:/var/lib/kubelet/pods/df48e7e0-0659-48e2-9b6a-32c964ff47b2/volumes/kubernetes.io~projected/kube-api-access-4dr4p major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/df48e7e0-0659-48e2-9b6a-32c964ff47b2/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/df48e7e0-0659-48e2-9b6a-32c964ff47b2/volumes/kubernetes.io~secret/metrics-tls major:0 minor:504 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dfe625a1-5ba4-491f-9ab3-5d91154961a0/volumes/kubernetes.io~projected/kube-api-access-j9c64:{mountpoint:/var/lib/kubelet/pods/dfe625a1-5ba4-491f-9ab3-5d91154961a0/volumes/kubernetes.io~projected/kube-api-access-j9c64 major:0 minor:138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dfe625a1-5ba4-491f-9ab3-5d91154961a0/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/dfe625a1-5ba4-491f-9ab3-5d91154961a0/volumes/kubernetes.io~secret/webhook-cert major:0 minor:139 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e635b0da-956b-4636-bc9b-61f231241908/volumes/kubernetes.io~secret/tls-certificates:{mountpoint:/var/lib/kubelet/pods/e635b0da-956b-4636-bc9b-61f231241908/volumes/kubernetes.io~secret/tls-certificates major:0 minor:1002 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e8ef68b9-6f8d-4697-b269-91ee4e310752/volumes/kubernetes.io~projected/kube-api-access-6ht4t:{mountpoint:/var/lib/kubelet/pods/e8ef68b9-6f8d-4697-b269-91ee4e310752/volumes/kubernetes.io~projected/kube-api-access-6ht4t major:0 minor:456 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e8ef68b9-6f8d-4697-b269-91ee4e310752/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/e8ef68b9-6f8d-4697-b269-91ee4e310752/volumes/kubernetes.io~secret/signing-key major:0 minor:455 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ecb3134a-ff4f-4069-8817-010b400296f6/volumes/kubernetes.io~projected/kube-api-access-pq2ch:{mountpoint:/var/lib/kubelet/pods/ecb3134a-ff4f-4069-8817-010b400296f6/volumes/kubernetes.io~projected/kube-api-access-pq2ch major:0 minor:1144 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ecb3134a-ff4f-4069-8817-010b400296f6/volumes/kubernetes.io~secret/federate-client-tls:{mountpoint:/var/lib/kubelet/pods/ecb3134a-ff4f-4069-8817-010b400296f6/volumes/kubernetes.io~secret/federate-client-tls major:0 minor:1141 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ecb3134a-ff4f-4069-8817-010b400296f6/volumes/kubernetes.io~secret/secret-telemeter-client:{mountpoint:/var/lib/kubelet/pods/ecb3134a-ff4f-4069-8817-010b400296f6/volumes/kubernetes.io~secret/secret-telemeter-client major:0 minor:1143 fsType:tmpfs 
blockSize:0} /var/lib/kubelet/pods/ecb3134a-ff4f-4069-8817-010b400296f6/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/ecb3134a-ff4f-4069-8817-010b400296f6/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config major:0 minor:591 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ecb3134a-ff4f-4069-8817-010b400296f6/volumes/kubernetes.io~secret/telemeter-client-tls:{mountpoint:/var/lib/kubelet/pods/ecb3134a-ff4f-4069-8817-010b400296f6/volumes/kubernetes.io~secret/telemeter-client-tls major:0 minor:1142 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e/volumes/kubernetes.io~projected/kube-api-access-l5xq4:{mountpoint:/var/lib/kubelet/pods/f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e/volumes/kubernetes.io~projected/kube-api-access-l5xq4 major:0 minor:404 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f3fbcd83-a3e1-4de1-aceb-2692d348e495/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/f3fbcd83-a3e1-4de1-aceb-2692d348e495/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:557 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f3fbcd83-a3e1-4de1-aceb-2692d348e495/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/f3fbcd83-a3e1-4de1-aceb-2692d348e495/volumes/kubernetes.io~empty-dir/tmp major:0 minor:558 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f3fbcd83-a3e1-4de1-aceb-2692d348e495/volumes/kubernetes.io~projected/kube-api-access-5jwf9:{mountpoint:/var/lib/kubelet/pods/f3fbcd83-a3e1-4de1-aceb-2692d348e495/volumes/kubernetes.io~projected/kube-api-access-5jwf9 major:0 minor:559 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f6fbc12f-3c27-4a7a-933f-43a55c960335/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/f6fbc12f-3c27-4a7a-933f-43a55c960335/volumes/kubernetes.io~projected/kube-api-access major:0 minor:234 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f6fbc12f-3c27-4a7a-933f-43a55c960335/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f6fbc12f-3c27-4a7a-933f-43a55c960335/volumes/kubernetes.io~secret/serving-cert major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9/volumes/kubernetes.io~projected/kube-api-access major:0 minor:701 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9/volumes/kubernetes.io~secret/serving-cert major:0 minor:702 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fd9abe2b-f829-4376-9abe-7da0a08770e7/volumes/kubernetes.io~projected/kube-api-access-vxssr:{mountpoint:/var/lib/kubelet/pods/fd9abe2b-f829-4376-9abe-7da0a08770e7/volumes/kubernetes.io~projected/kube-api-access-vxssr major:0 minor:866 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fd9abe2b-f829-4376-9abe-7da0a08770e7/volumes/kubernetes.io~secret/samples-operator-tls:{mountpoint:/var/lib/kubelet/pods/fd9abe2b-f829-4376-9abe-7da0a08770e7/volumes/kubernetes.io~secret/samples-operator-tls major:0 minor:865 fsType:tmpfs blockSize:0} overlay_0-100:{mountpoint:/var/lib/containers/storage/overlay/4b9ef557e4c58fb7270c28558229e246777c7270722dd6d61328efea31d3bf3e/merged major:0 minor:100 fsType:overlay blockSize:0} 
overlay_0-1006:{mountpoint:/var/lib/containers/storage/overlay/b2140f867970a6431d7eca08fa7db2e5ee16f3f47f872afdbd7dee70579c31ee/merged major:0 minor:1006 fsType:overlay blockSize:0} overlay_0-1012:{mountpoint:/var/lib/containers/storage/overlay/df1cbeee05db99b7e5903beebc286695b88dd1c276c8c986a023c2e85ab35d86/merged major:0 minor:1012 fsType:overlay blockSize:0} overlay_0-1014:{mountpoint:/var/lib/containers/storage/overlay/2fc28a9e5b11228a97688efb59bce7d40605ad1ed81cc3c42a28cb5f99bef5b3/merged major:0 minor:1014 fsType:overlay blockSize:0} overlay_0-1016:{mountpoint:/var/lib/containers/storage/overlay/f6d94d841a8668130fef2e23b6e25afb6e7d162de61d9a4ae375d57fb26b9a22/merged major:0 minor:1016 fsType:overlay blockSize:0} overlay_0-1028:{mountpoint:/var/lib/containers/storage/overlay/4a39551357091a9aadd2bfd9a6c4f7ffe7e95b273715432184998b8d818e4c70/merged major:0 minor:1028 fsType:overlay blockSize:0} overlay_0-103:{mountpoint:/var/lib/containers/storage/overlay/f955515d0892db748d3afda3e6d6141a6fe0d2c9f21dd890521b56021d2fbab4/merged major:0 minor:103 fsType:overlay blockSize:0} overlay_0-1038:{mountpoint:/var/lib/containers/storage/overlay/2e039b125a78db2312348505caf5f4694f72ce089929303004c621b5bf1ec5e6/merged major:0 minor:1038 fsType:overlay blockSize:0} overlay_0-1040:{mountpoint:/var/lib/containers/storage/overlay/0cff8fc6eef9f22f9a7b6a8d92ceb44fb975894d966486c4b8160f14e087efc4/merged major:0 minor:1040 fsType:overlay blockSize:0} overlay_0-1042:{mountpoint:/var/lib/containers/storage/overlay/85ffd74f8678c638a607fc015583a497498c2bc019404b6bc079be8f7d3b00ee/merged major:0 minor:1042 fsType:overlay blockSize:0} overlay_0-1044:{mountpoint:/var/lib/containers/storage/overlay/d4709955ebfcc621ff109a7b404af02e8707e02674deef4001aeea77383fc5f1/merged major:0 minor:1044 fsType:overlay blockSize:0} overlay_0-1046:{mountpoint:/var/lib/containers/storage/overlay/fb855be4278fc3ac5774482e936a092f0ff7dbd03b3d7d869ed22a7c494dd295/merged major:0 minor:1046 fsType:overlay blockSize:0} overlay_0-1049:{mountpoint:/var/lib/containers/storage/overlay/417da8e97eccfa54ce76e55e757f13fe4b3465a1d90af4ee7743329975661760/merged major:0 minor:1049 fsType:overlay blockSize:0} overlay_0-1054:{mountpoint:/var/lib/containers/storage/overlay/b2cde00e9adfbe0fb43665ad15603f924d35fe61a29ab9eeb475aef34e6d6816/merged major:0 minor:1054 fsType:overlay blockSize:0} overlay_0-1075:{mountpoint:/var/lib/containers/storage/overlay/ecf289c49f72390a8987e5c2c5cb6ffa0f0f61b57033e2727143d8007f17eac1/merged major:0 minor:1075 fsType:overlay blockSize:0} overlay_0-108:{mountpoint:/var/lib/containers/storage/overlay/899d7cdb1048f8c9af889adad4a54781ee127a54f5f3773de0258d8b9531db54/merged major:0 minor:108 fsType:overlay blockSize:0} overlay_0-1080:{mountpoint:/var/lib/containers/storage/overlay/077f85449e4fe3ac40cd57a733e556e3937b815eb4bdbdfc6fc7dbfb453b799a/merged major:0 minor:1080 fsType:overlay blockSize:0} overlay_0-1082:{mountpoint:/var/lib/containers/storage/overlay/5fb59e9a67c861bf8094af1a463d9ad49b001e5d2009696aca31161d038ba5b4/merged major:0 minor:1082 fsType:overlay blockSize:0} overlay_0-1084:{mountpoint:/var/lib/containers/storage/overlay/c392d23cd1c66a597e1dc989ac4dd65b378346302055fef02d230852f218584e/merged major:0 minor:1084 fsType:overlay blockSize:0} overlay_0-1086:{mountpoint:/var/lib/containers/storage/overlay/a6a49ec4ef1da7e953a23127d4235f35e6a9275497078fc5129ef1af98408414/merged major:0 minor:1086 fsType:overlay blockSize:0} 
overlay_0-109:{mountpoint:/var/lib/containers/storage/overlay/28c3acaced8a8ee39eee9e552efe6d48f0008a2f9ac9d4682e5c4a28a936651a/merged major:0 minor:109 fsType:overlay blockSize:0} overlay_0-1092:{mountpoint:/var/lib/containers/storage/overlay/f3b5b90b22aa329f5d204100a2cf6c4ae63ba8fc911ce10c721a5ce43be9b45f/merged major:0 minor:1092 fsType:overlay blockSize:0} overlay_0-1097:{mountpoint:/var/lib/containers/storage/overlay/5a87afaaa81fe6d43a39c9fe17d7301d88bb9e347077af4653d4d4e23ce18fec/merged major:0 minor:1097 fsType:overlay blockSize:0} overlay_0-1099:{mountpoint:/var/lib/containers/storage/overlay/be83192df1b59dff0592d5764fb95f0a5f6df0fa175e5813815d342d97600231/merged major:0 minor:1099 fsType:overlay blockSize:0} overlay_0-1101:{mountpoint:/var/lib/containers/storage/overlay/8f60bc51a6138b9db78dbff3b64b7cc6c0a295e72ff482fd45885fa0e7553968/merged major:0 minor:1101 fsType:overlay blockSize:0} overlay_0-1113:{mountpoint:/var/lib/containers/storage/overlay/7bbb33ede96544ae0e773494941c01fd8e77f8b11b8966ad9dd770879b430cc5/merged major:0 minor:1113 fsType:overlay blockSize:0} overlay_0-1118:{mountpoint:/var/lib/containers/storage/overlay/4edfd5a64341eca434f6638d18cc219bde33260fcee2ac6ad2bef9a15bd78635/merged major:0 minor:1118 fsType:overlay blockSize:0} overlay_0-113:{mountpoint:/var/lib/containers/storage/overlay/57cf6096f30f9442f302a08c1c1dcf61f45ec42b3dc4e14138079ce418ed8e9e/merged major:0 minor:113 fsType:overlay blockSize:0} overlay_0-1132:{mountpoint:/var/lib/containers/storage/overlay/65a57ce1cca4dd7b2645d17794b7b469f5ba7bd12b6a0557e19a96c0c8f11622/merged major:0 minor:1132 fsType:overlay blockSize:0} overlay_0-1134:{mountpoint:/var/lib/containers/storage/overlay/2868b8dca02fc79bba60f373c317f0c252de37c00d3b1ca06e9681dd868326c6/merged major:0 minor:1134 fsType:overlay blockSize:0} overlay_0-1146:{mountpoint:/var/lib/containers/storage/overlay/21a88653a70e4b87d6f6bdc4ac584a4e96c7ed195597affb5bdf15b7ef8fa12c/merged major:0 minor:1146 fsType:overlay blockSize:0} overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/9ae1c55f2496c6ceff5723391c838674fba7f4b7090ecb86d435572168fac9f9/merged major:0 minor:116 fsType:overlay blockSize:0} overlay_0-1170:{mountpoint:/var/lib/containers/storage/overlay/c56f088c06d4d6f3a28625746f85e2deb3a5a22dc689a16f6edb9885c646f922/merged major:0 minor:1170 fsType:overlay blockSize:0} overlay_0-1179:{mountpoint:/var/lib/containers/storage/overlay/19bc7f11b61224eb50552759c2877864871f953e888b5468f230e4b1446f8288/merged major:0 minor:1179 fsType:overlay blockSize:0} overlay_0-1184:{mountpoint:/var/lib/containers/storage/overlay/cad409f56504a5e1cb5cf161be66c332134139a78e373c56ef66d0e8a7784024/merged major:0 minor:1184 fsType:overlay blockSize:0} overlay_0-1186:{mountpoint:/var/lib/containers/storage/overlay/3021b0fce04c7db7aec6d512542e15c987bc04d81ae7c3e2eb02e556c19b7dad/merged major:0 minor:1186 fsType:overlay blockSize:0} overlay_0-1188:{mountpoint:/var/lib/containers/storage/overlay/a5ad2a30ed4b12ab5c87be3469c8c77908948e89eb5eac417a41deaf4604f41f/merged major:0 minor:1188 fsType:overlay blockSize:0} overlay_0-1200:{mountpoint:/var/lib/containers/storage/overlay/30320ac28f327009e8b2a42753130d043e859d6994d9621eaab279ea1df6fed6/merged major:0 minor:1200 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/0ca4ad78ef77c9876e9ba7d4f53af73e12dc4f6a33819e18b6fa3b342ae84964/merged major:0 minor:121 fsType:overlay blockSize:0} 
overlay_0-131:{mountpoint:/var/lib/containers/storage/overlay/94a10454272b1374bad59efb8e08071378d3ae1cfaff5f38397c6e8d00e23d2b/merged major:0 minor:131 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/c9dc2349eab29437f1d190c4e406e8ac1fb58cc9f8a4d0d827396ef50fdc0543/merged major:0 minor:134 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/30a16998d429daeee925269ae652616a9286fd67162e29bcf52d3591b3a919ec/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-140:{mountpoint:/var/lib/containers/storage/overlay/a609cbf49b05793501334970b634181731726727a2be8ac40e840fa381fde4a4/merged major:0 minor:140 fsType:overlay blockSize:0} overlay_0-147:{mountpoint:/var/lib/containers/storage/overlay/483ca6331dfceeca3c53378710fc64c3ae066d032338fe7e2583f4f5dc30d56d/merged major:0 minor:147 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/a35258cfdee261463788e0d8158218940fcfdbddd4b3a8a9ac69e8688c01b1aa/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/0370b25f6c02e55181a037c8c23798d3570bf687d9826f02566d4c4fcf785e0b/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-159:{mountpoint:/var/lib/containers/storage/overlay/3d35a7d690d67aa08a8a076d38aad49ee565e49d66e6db5cd593f91d2cef05c0/merged major:0 minor:159 fsType:overlay blockSize:0} overlay_0-160:{mountpoint:/var/lib/containers/storage/overlay/3d89b949d97e8f8a76c26a35724e055ed20417d01f007e60d589b2a478d468e7/merged major:0 minor:160 fsType:overlay blockSize:0} overlay_0-161:{mountpoint:/var/lib/containers/storage/overlay/4fa0c22f6f71e653df031908b17fdda418def3fa41d094127c5cdca22dd8bb5b/merged major:0 minor:161 fsType:overlay blockSize:0} overlay_0-163:{mountpoint:/var/lib/containers/storage/overlay/bd10125f36aa87c481bcf61c3e8c69d0740d9e3f22c57bd88f9147c4004a6798/merged major:0 minor:163 fsType:overlay blockSize:0} overlay_0-166:{mountpoint:/var/lib/containers/storage/overlay/8186d0483e260834938131010c021065e4c2556b963c0725d866e69b1b168c99/merged major:0 minor:166 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/8792969b3f4c88f7923b88123140f3e458eaedc78a146ad6ba1be2364e5ad78c/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/2a518e72fa4a3c5d6699da16662bef60401b0f420e71a058b17441be9bc7acf9/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/76bc22d5d9e7495282be80f4f18b82d67f8acf877292fe52cddf77190f536890/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/f1d960b1e2deb9b29be0d38b177e6ca8100005c9f5c95802b689a7bb842e7989/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/3de24a697ad82e9f43dbb171323df53e5d2166569d8c9963d51a509832ad7955/merged major:0 minor:194 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/7252e34a64d1355c90abc39f6e1cafe9c269d2b2c86e50a831a2b6026a1babb6/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/0e972913ba2ce3b62f4fd2363580f19c71de5636991519a0e86e160f4e592110/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-271:{mountpoint:/var/lib/containers/storage/overlay/7b2dfeb16758221bee63cf6ec35b4c966cc29a93a3b1180716438ed3bd48a829/merged 
major:0 minor:271 fsType:overlay blockSize:0} overlay_0-273:{mountpoint:/var/lib/containers/storage/overlay/36725910f64ef7e4cc68138aa0cb6bf3441b893d0921d893a70b8f66edc8e70c/merged major:0 minor:273 fsType:overlay blockSize:0} overlay_0-277:{mountpoint:/var/lib/containers/storage/overlay/5dd09a6ae86e298977b5cf52d241380a6376cb00b618386175c37fdc9785b772/merged major:0 minor:277 fsType:overlay blockSize:0} overlay_0-279:{mountpoint:/var/lib/containers/storage/overlay/0546f88f594fe7399a103ff825f9838c0d797481d58a74579fc98afac0617e86/merged major:0 minor:279 fsType:overlay blockSize:0} overlay_0-283:{mountpoint:/var/lib/containers/storage/overlay/a02bd99cb03a0c5e9cc1c9636a5837ff64d060d5a0ca65b8d6dbe02cdf131685/merged major:0 minor:283 fsType:overlay blockSize:0} overlay_0-285:{mountpoint:/var/lib/containers/storage/overlay/2e3a36452524cda1396d311f3234425b1c501cca8e93a7281714792a03e4e911/merged major:0 minor:285 fsType:overlay blockSize:0} overlay_0-287:{mountpoint:/var/lib/containers/storage/overlay/5b404395730372dc8eee0f86ee27722c0b1f983789c5c3f8f82adff3c7f4b7f6/merged major:0 minor:287 fsType:overlay blockSize:0} overlay_0-289:{mountpoint:/var/lib/containers/storage/overlay/3b388c0cfc629303cf7b89af2b5de30681d039acfe0c2e900c55c34803f7bed9/merged major:0 minor:289 fsType:overlay blockSize:0} overlay_0-291:{mountpoint:/var/lib/containers/storage/overlay/c33692609526097a80e1d5ebc9f8e46f0e295a8a3eae9c2cb19f26a75a4a8425/merged major:0 minor:291 fsType:overlay blockSize:0} overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/819b8f52ca1d933aa06e8eba07158c804e060f60418ec93b546bbf259cb758f9/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-295:{mountpoint:/var/lib/containers/storage/overlay/097fe0d30b1afefa04a4c1be42cbdf01a3dcb70f68608a39b6d51b76fc3a78ed/merged major:0 minor:295 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/e18b5c92bd9a98e649424f68f6809f0fb5a6a7be3827706042e3208ed8da5e78/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/7b22ae356ce0fc905cdf8afe0447686cce8baf065a997dfa7fddfd54f3ead2ba/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-303:{mountpoint:/var/lib/containers/storage/overlay/b70b324fbd42de9322a7651f61920739f7766d958b642a542d2b431bc14c5348/merged major:0 minor:303 fsType:overlay blockSize:0} overlay_0-309:{mountpoint:/var/lib/containers/storage/overlay/3fc9d828bcfe58bdb5d34647b96370b50c5168d0650c6780114f6b6871d30522/merged major:0 minor:309 fsType:overlay blockSize:0} overlay_0-314:{mountpoint:/var/lib/containers/storage/overlay/88d0658b3e45171d064c23422aedd4b8784fcae2aacae30abd52fc63dc2b1bd6/merged major:0 minor:314 fsType:overlay blockSize:0} overlay_0-316:{mountpoint:/var/lib/containers/storage/overlay/ea7b1266f244259b813cef08c4400b2b7f8b932341feec5d1bb16e811120c83f/merged major:0 minor:316 fsType:overlay blockSize:0} overlay_0-318:{mountpoint:/var/lib/containers/storage/overlay/3220b941ed4154add6d3084787d3de0f7a524f8abcc516d29d70165ea73e70a2/merged major:0 minor:318 fsType:overlay blockSize:0} overlay_0-319:{mountpoint:/var/lib/containers/storage/overlay/020da19d9ff3a7d0436d4e1fb8517fca863914e3f111ed7dc803d79693389156/merged major:0 minor:319 fsType:overlay blockSize:0} overlay_0-320:{mountpoint:/var/lib/containers/storage/overlay/0df6cb1cec049d087a990ba9c1caf2f8352b34be7955b9e3540b546d4c5ac551/merged major:0 minor:320 fsType:overlay blockSize:0} 
overlay_0-322:{mountpoint:/var/lib/containers/storage/overlay/dc240a9fb54f3c20270dfa45b1a65b5f6bb730eb451453995684581eba7bdf7c/merged major:0 minor:322 fsType:overlay blockSize:0} overlay_0-324:{mountpoint:/var/lib/containers/storage/overlay/9c0af24c7d288dd308c133c5f2f3a0f443572e8a8112ba43e4067a1f044f9dc5/merged major:0 minor:324 fsType:overlay blockSize:0} overlay_0-328:{mountpoint:/var/lib/containers/storage/overlay/f69d4d0aeeb2d985f9381474742979d216c93d1f09e99d2dedc3a7f344eb4007/merged major:0 minor:328 fsType:overlay blockSize:0} overlay_0-329:{mountpoint:/var/lib/containers/storage/overlay/b1a32f0347ecf624bb3b076971af6f6c2f1f4afc95c6c50589c885700d5875b2/merged major:0 minor:329 fsType:overlay blockSize:0} overlay_0-331:{mountpoint:/var/lib/containers/storage/overlay/fdc5f7800f35535d1a15d56fa66f101c53fe9047a83deebde36b88d9af2649c0/merged major:0 minor:331 fsType:overlay blockSize:0} overlay_0-338:{mountpoint:/var/lib/containers/storage/overlay/199cd87eec41e5881b96477b46e55c59606d551c8b6973d3af14e100459c63f9/merged major:0 minor:338 fsType:overlay blockSize:0} overlay_0-343:{mountpoint:/var/lib/containers/storage/overlay/5eb0e3707e230b3bd670e01e04f242ec6708fe9735da12db0967991f770d89b5/merged major:0 minor:343 fsType:overlay blockSize:0} overlay_0-346:{mountpoint:/var/lib/containers/storage/overlay/971d2eb7f60ca04da3187f084393c691e847cc3ac01535bbc6ce9e18ef6c18e2/merged major:0 minor:346 fsType:overlay blockSize:0} overlay_0-348:{mountpoint:/var/lib/containers/storage/overlay/db0db1e468db455cad0e85ed30adab1a0fd6292877927ba91e7f963fd9a2e126/merged major:0 minor:348 fsType:overlay blockSize:0} overlay_0-350:{mountpoint:/var/lib/containers/storage/overlay/8f0b66dbbcb6d235f3d6fc50f865e8a5823e20173688387a01f35137c7ffd173/merged major:0 minor:350 fsType:overlay blockSize:0} overlay_0-352:{mountpoint:/var/lib/containers/storage/overlay/920aae939b41dbf30d46222df1cd68df00c530220b5dab73b2f275427ed87abb/merged major:0 minor:352 fsType:overlay blockSize:0} overlay_0-358:{mountpoint:/var/lib/containers/storage/overlay/e8af921bbd12faf565f0c9d4605c8c2611ea85b17fac0971d11293016e13418f/merged major:0 minor:358 fsType:overlay blockSize:0} overlay_0-360:{mountpoint:/var/lib/containers/storage/overlay/d780f09d8d3988c01310f1ba4ad035c1abf00ec709b29dbdb588d3a6a073dad3/merged major:0 minor:360 fsType:overlay blockSize:0} overlay_0-363:{mountpoint:/var/lib/containers/storage/overlay/5c825db4718b74f7edf097c738c26bb7ad401d8167a3d6e4fc3a2ab242630664/merged major:0 minor:363 fsType:overlay blockSize:0} overlay_0-364:{mountpoint:/var/lib/containers/storage/overlay/81dff65e037ea96d993ebbd2d24349ce54b104a061d7493425ff4d1db4390a43/merged major:0 minor:364 fsType:overlay blockSize:0} overlay_0-367:{mountpoint:/var/lib/containers/storage/overlay/843d5085cb0b6503850224ef9c72bc5a8dc063fac9e5857370cd90f7e99f87cf/merged major:0 minor:367 fsType:overlay blockSize:0} overlay_0-372:{mountpoint:/var/lib/containers/storage/overlay/2c3d273f9a1a38f8f5e336ee444c8ef5f1806eb6fce52950070bf70b8e2b5611/merged major:0 minor:372 fsType:overlay blockSize:0} overlay_0-374:{mountpoint:/var/lib/containers/storage/overlay/25487d2146b60ee3a674ccd739e06ba36775591c96fa343dc640f9b80482629a/merged major:0 minor:374 fsType:overlay blockSize:0} overlay_0-391:{mountpoint:/var/lib/containers/storage/overlay/eed9d52c8492600f29af028cd0751ece0602187969daffa8b3ab90ca7e185654/merged major:0 minor:391 fsType:overlay blockSize:0} overlay_0-392:{mountpoint:/var/lib/containers/storage/overlay/86931eea3737b191523aa0a6ba5091795fa44e4ab62ca3094473fc1b2b97ba9c/merged 
major:0 minor:392 fsType:overlay blockSize:0} overlay_0-393:{mountpoint:/var/lib/containers/storage/overlay/83de03fb72bb3388e72fdd83be268f2c6b21066f2384794b8cfe43cd7c477159/merged major:0 minor:393 fsType:overlay blockSize:0} overlay_0-396:{mountpoint:/var/lib/containers/storage/overlay/7a48e3b6bf4e747b1ee7a78d95057f8b1ec69ef4f4b5fc2076a0024847453e2e/merged major:0 minor:396 fsType:overlay blockSize:0} overlay_0-402:{mountpoint:/var/lib/containers/storage/overlay/d368daab29312622c3d44202b80274c283a93bd8e793462a54f861e4a4ead75e/merged major:0 minor:402 fsType:overlay blockSize:0} overlay_0-41:{mountpoint:/var/lib/containers/storage/overlay/499a71c982f8bd847ccff24ddc649eef81f9ac399f8b9746e9c7e6fb0f26beac/merged major:0 minor:41 fsType:overlay blockSize:0} overlay_0-410:{mountpoint:/var/lib/containers/storage/overlay/61f665414fd1bb422ce0fac999a306e189c735b78e00bd538d3aa8a0ad2123d1/merged major:0 minor:410 fsType:overlay blockSize:0} overlay_0-422:{mountpoint:/var/lib/containers/storage/overlay/428249807a658d361d9490bb18a34506327c17ff005d77e88aae7d66a1be0a98/merged major:0 minor:422 fsType:overlay blockSize:0} overlay_0-424:{mountpoint:/var/lib/containers/storage/overlay/560e4f3d975d649c483061169419f3ffe9a787ca6d59b8a23410029e203a2517/merged major:0 minor:424 fsType:overlay blockSize:0} overlay_0-432:{mountpoint:/var/lib/containers/storage/overlay/62d75696c6dcd0e55549484bd05450e7b1a1e51100c02d558b6a874e8a63aabd/merged major:0 minor:432 fsType:overlay blockSize:0} overlay_0-433:{mountpoint:/var/lib/containers/storage/overlay/4a16ee048c83e4b28e6248edeff564b689f274f9c7d1888dfda33ae061e0b859/merged major:0 minor:433 fsType:overlay blockSize:0} overlay_0-435:{mountpoint:/var/lib/containers/storage/overlay/0e41cf7a64497235349ecfc44c892530f037dc1d6f8fc4e259b38228208ee655/merged major:0 minor:435 fsType:overlay blockSize:0} overlay_0-450:{mountpoint:/var/lib/containers/storage/overlay/7e670c9bb28929c64ede14d235165e3d7b2f2241f3545e60b0af8bfa3406721e/merged major:0 minor:450 fsType:overlay blockSize:0} overlay_0-457:{mountpoint:/var/lib/containers/storage/overlay/ae8cc82047bbdd98c0de2bc3733e07fcc2000705d2881c2a109a0baff1d93b15/merged major:0 minor:457 fsType:overlay blockSize:0} overlay_0-467:{mountpoint:/var/lib/containers/storage/overlay/6b3477ebc149568d55886134276e00f01b7134ff394c086edf9afca555b62d44/merged major:0 minor:467 fsType:overlay blockSize:0} overlay_0-475:{mountpoint:/var/lib/containers/storage/overlay/6faaad979b34b4233eca75d8a388e3095bd99bc8c8900b27996d9b81f26c2535/merged major:0 minor:475 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/207332ee743a60ce32b960ff4e5cc5656d3162f2a586d6d83715725e4febf570/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-485:{mountpoint:/var/lib/containers/storage/overlay/fd8bb7fb70faefc2c3a28c9884225087fb652b0e5d203cbd8d7ecc0384154d31/merged major:0 minor:485 fsType:overlay blockSize:0} overlay_0-488:{mountpoint:/var/lib/containers/storage/overlay/109a5df1faa207a7b57f224dfa021090e48b7916987a0341ef6fea4b039c19be/merged major:0 minor:488 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/4e6b4c44fa9d54c25ab72d994ace3de7037888d04bec3e081eb859653be13b48/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-529:{mountpoint:/var/lib/containers/storage/overlay/a28ed93bbf1e32de50947d4901bac89f05686d0fab35394e2ce901d0afdc4766/merged major:0 minor:529 fsType:overlay blockSize:0} 
overlay_0-530:{mountpoint:/var/lib/containers/storage/overlay/93e321f0758c426ac0281989ea607cfcff9054060bbb9fdcea19ec5058159d21/merged major:0 minor:530 fsType:overlay blockSize:0} overlay_0-546:{mountpoint:/var/lib/containers/storage/overlay/113fd32d7f013187e8f6d49436700b3318ace39ba7e2ba660241ae60b123bc43/merged major:0 minor:546 fsType:overlay blockSize:0} overlay_0-548:{mountpoint:/var/lib/containers/storage/overlay/a60f122c5b7d14cc0886a9cfe8f29a9eaec05a54ed76e32b2f2ba6628c0f5b04/merged major:0 minor:548 fsType:overlay blockSize:0} overlay_0-550:{mountpoint:/var/lib/containers/storage/overlay/a2dc83a90ad54283c0db4153f6acd5025b5f37c4a81b3a49512a06415f6905ba/merged major:0 minor:550 fsType:overlay blockSize:0} overlay_0-553:{mountpoint:/var/lib/containers/storage/overlay/5d14aea79a933e9d300c4246215e7fda87496a1c08007f969f90bfc0a2046ef0/merged major:0 minor:553 fsType:overlay blockSize:0} overlay_0-555:{mountpoint:/var/lib/containers/storage/overlay/0a779af953bb6ef1328b8eacec15d80701b4d26db6d8220f0983df62e6680a30/merged major:0 minor:555 fsType:overlay blockSize:0} overlay_0-560:{mountpoint:/var/lib/containers/storage/overlay/3fc0977e2f66fc1aa54032e4056ea941e58e58f2c0774089c926477765b5f9a5/merged major:0 minor:560 fsType:overlay blockSize:0} overlay_0-566:{mountpoint:/var/lib/containers/storage/overlay/12903bd2ecea8fc93726b007b1a77caf82a84652b0a281ea927a5045b47e8c54/merged major:0 minor:566 fsType:overlay blockSize:0} overlay_0-576:{mountpoint:/var/lib/containers/storage/overlay/fbd750c6ac932a49f2503e97e618ade12b80231e691a5621121fc1871bef0102/merged major:0 minor:576 fsType:overlay blockSize:0} overlay_0-587:{mountpoint:/var/lib/containers/storage/overlay/398ee7fefea407e1cb427338c64af85853cdbc2a85a14a9f23b1dd5008676342/merged major:0 minor:587 fsType:overlay blockSize:0} overlay_0-589:{mountpoint:/var/lib/containers/storage/overlay/2f8291c0839f28f993168ffb1cdcd9f2181a076f336e5b72524abff2b8cfc443/merged major:0 minor:589 fsType:overlay blockSize:0} overlay_0-593:{mountpoint:/var/lib/containers/storage/overlay/b0fa87d704779ae7eb5641d99dd8e0c755a429f07a3bc2ff3cd2969476fd4f1a/merged major:0 minor:593 fsType:overlay blockSize:0} overlay_0-603:{mountpoint:/var/lib/containers/storage/overlay/e1dc9e3f2821a3cef0ab2022e7566a36b8f7e5d3a38d015bc0f692cb5d3ea820/merged major:0 minor:603 fsType:overlay blockSize:0} overlay_0-605:{mountpoint:/var/lib/containers/storage/overlay/0e61abd75bfac0bd0839af94a595e87decff954a47921532ee73457d400a79b7/merged major:0 minor:605 fsType:overlay blockSize:0} overlay_0-609:{mountpoint:/var/lib/containers/storage/overlay/ddfde7e5b4327447078d3f5319ac13e46ea54c74525711f695683ac0cd2dc4fe/merged major:0 minor:609 fsType:overlay blockSize:0} overlay_0-612:{mountpoint:/var/lib/containers/storage/overlay/bb8765740109f9fe71476f82fc52b74a256bec5d82f41d1d5e9cdd79bb7f8c9c/merged major:0 minor:612 fsType:overlay blockSize:0} overlay_0-614:{mountpoint:/var/lib/containers/storage/overlay/1184422fefb8b4998c66b8f7b812e16e4ed7c91dc1ce7e052f3bcb9502259737/merged major:0 minor:614 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/833cb12257f96cf06cc7e579fe056c2a4338606e88cb811fbd1fc38222842c0f/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-634:{mountpoint:/var/lib/containers/storage/overlay/b41a861187f5f0079cbec462e5b9b86b826d78431601c7d4b19ded6a54e375ec/merged major:0 minor:634 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/01bbe6fa11a4d8115c4f256b82c3a83c2fa4854e51c2806bf64d400f745d9f2f/merged 
major:0 minor:64 fsType:overlay blockSize:0} overlay_0-640:{mountpoint:/var/lib/containers/storage/overlay/24a34c04be6210c0f0e6c1a8757d112c228930f9c54def50e29888e40e3e2928/merged major:0 minor:640 fsType:overlay blockSize:0} overlay_0-651:{mountpoint:/var/lib/containers/storage/overlay/c92b479c354f7eec8008bad40f3d22e62a2f701166f1eaf6a188ed526683ca7d/merged major:0 minor:651 fsType:overlay blockSize:0} overlay_0-656:{mountpoint:/var/lib/containers/storage/overlay/4b73f87b1d9bdfd3f8cb84457acafd31fdef90fcd74ebbab9716f57c7355b08a/merged major:0 minor:656 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/1ee52b686831f4309c61a6e836f1fad38b510e8f556188345d3ef0771ee6aaf8/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-661:{mountpoint:/var/lib/containers/storage/overlay/acc5ddd1bcf79c79e1ffe4acd2201915bb86af6be476bc3ab71c0abc7c41ac77/merged major:0 minor:661 fsType:overlay blockSize:0} overlay_0-67:{mountpoint:/var/lib/containers/storage/overlay/7ae6422ad846279097d40397299aa4a6f68c5f99b35909c252fd064a09fdd8d5/merged major:0 minor:67 fsType:overlay blockSize:0} overlay_0-678:{mountpoint:/var/lib/containers/storage/overlay/9a04b493a38a5750ce8b65c3ef1235d0a33f6eb8b98519f911096d6745a34933/merged major:0 minor:678 fsType:overlay blockSize:0} overlay_0-680:{mountpoint:/var/lib/containers/storage/overlay/0bcf6ec75b14b98fe51c76235a8226545347558eb40c2f659167784b3c8cee00/merged major:0 minor:680 fsType:overlay blockSize:0} overlay_0-682:{mountpoint:/var/lib/containers/storage/overlay/6ebe7acf678e6d1178d0e528e18f951aab5ae244f0eb49fab649213b1ae58538/merged major:0 minor:682 fsType:overlay blockSize:0} overlay_0-684:{mountpoint:/var/lib/containers/storage/overlay/e3593f4db0c7434e57f747285629766d29b2ed9b5a9848ced54a3881706da6f7/merged major:0 minor:684 fsType:overlay blockSize:0} overlay_0-69:{mountpoint:/var/lib/containers/storage/overlay/cbb26c337426c36a0cb4de6d1a6aed3d1341e5179903f26501e477c5364009d5/merged major:0 minor:69 fsType:overlay blockSize:0} overlay_0-699:{mountpoint:/var/lib/containers/storage/overlay/dd25ae42b0b8a2b151edb0ae8e7a6e4734d5a33b4fa79201863f553795cda70e/merged major:0 minor:699 fsType:overlay blockSize:0} overlay_0-707:{mountpoint:/var/lib/containers/storage/overlay/bdc8812d5e500d425f82dd481a94d435e68d83b7b5d2abeaefb1719ed22e8f8e/merged major:0 minor:707 fsType:overlay blockSize:0} overlay_0-708:{mountpoint:/var/lib/containers/storage/overlay/49d3b144fae3b777c6e12094582e4be2f81490093c8fe45acee32130a7904bff/merged major:0 minor:708 fsType:overlay blockSize:0} overlay_0-71:{mountpoint:/var/lib/containers/storage/overlay/95d2473e0d2585c6bf22b04cfb0e474c1fd42db9b107561ccf819a78cdf100ab/merged major:0 minor:71 fsType:overlay blockSize:0} overlay_0-710:{mountpoint:/var/lib/containers/storage/overlay/c31932773b6585e0f52302e7731f0b2458075cd9a6fd9b97ab505e337fdb0a4c/merged major:0 minor:710 fsType:overlay blockSize:0} overlay_0-713:{mountpoint:/var/lib/containers/storage/overlay/88ebbbc103124a03b49c22430a316fe3a28719555c58f5f02631fa59f8d02c2f/merged major:0 minor:713 fsType:overlay blockSize:0} overlay_0-718:{mountpoint:/var/lib/containers/storage/overlay/f786fd63f3916c0c76eedbb98049d2e14de800f22506eb355f46872ef622b829/merged major:0 minor:718 fsType:overlay blockSize:0} overlay_0-721:{mountpoint:/var/lib/containers/storage/overlay/dd7a9913d0e885f2a01661f32b3d78dc01d854c55cc8acbb58f5d3a624f9b9c9/merged major:0 minor:721 fsType:overlay blockSize:0} 
overlay_0-723:{mountpoint:/var/lib/containers/storage/overlay/b98937af010dfa1cbc85370e45de7e18290885c587caca137a572748af248942/merged major:0 minor:723 fsType:overlay blockSize:0} overlay_0-727:{mountpoint:/var/lib/containers/storage/overlay/4b4d338c9167df49f972f4e388b73cf775df8e22848f28cc7b879655afba4028/merged major:0 minor:727 fsType:overlay blockSize:0} overlay_0-736:{mountpoint:/var/lib/containers/storage/overlay/cd3a2866cd3bb49b205ad45106fc27ca0f1a592920741077d0a9022c29d2b485/merged major:0 minor:736 fsType:overlay blockSize:0} overlay_0-738:{mountpoint:/var/lib/containers/storage/overlay/1a93548c9f4813795bb1b0c1d861ac434d57cecc402e0b1a50ded5143eb929e9/merged major:0 minor:738 fsType:overlay blockSize:0} overlay_0-74:{mountpoint:/var/lib/containers/storage/overlay/eef7736205bb4b0856bf85790d505203654ff3518e2d394de7323ecdfc3bf13c/merged major:0 minor:74 fsType:overlay blockSize:0} overlay_0-740:{mountpoint:/var/lib/containers/storage/overlay/cda25b6717649eb317a2719ba50a92913fd3ae9539542d324e6047c90352c8a1/merged major:0 minor:740 fsType:overlay blockSize:0} overlay_0-742:{mountpoint:/var/lib/containers/storage/overlay/414dfdb7eb204012ae055ea40c7140d10470e37ad8fbfb7f0b1b4efb2a57c216/merged major:0 minor:742 fsType:overlay blockSize:0} overlay_0-75:{mountpoint:/var/lib/containers/storage/overlay/3837999977a2445143e641791a309bccffb974fb808f5589950d3212b1eeec54/merged major:0 minor:75 fsType:overlay blockSize:0} overlay_0-754:{mountpoint:/var/lib/containers/storage/overlay/17660ef60fe6cf9b69e89ad201e0ab540228cdf3d90fee2d01ac5bcc39fa12d6/merged major:0 minor:754 fsType:overlay blockSize:0} overlay_0-757:{mountpoint:/var/lib/containers/storage/overlay/963c9b7305c647b9ee1fd3a593c583a0f9442afc20dfb0e91ece0f338e1c3f79/merged major:0 minor:757 fsType:overlay blockSize:0} overlay_0-759:{mountpoint:/var/lib/containers/storage/overlay/1c7a823b6b45c6fafd1d9ba7b6e3b98997c7810c8d05bacf92fc2b52d41621b6/merged major:0 minor:759 fsType:overlay blockSize:0} overlay_0-761:{mountpoint:/var/lib/containers/storage/overlay/a406a9189d7329a7e9d48892513682def2f42ae8f102f18037a7c0dc96891955/merged major:0 minor:761 fsType:overlay blockSize:0} overlay_0-766:{mountpoint:/var/lib/containers/storage/overlay/63949079358f33a7a8d4fe6ecabf62cf12bc85d457b1182354ef1d20a9ae6d06/merged major:0 minor:766 fsType:overlay blockSize:0} overlay_0-768:{mountpoint:/var/lib/containers/storage/overlay/c288b55a7489ae976ea0dbbedb87eac9298846aa68b153829abcd8d4d7b78ff1/merged major:0 minor:768 fsType:overlay blockSize:0} overlay_0-77:{mountpoint:/var/lib/containers/storage/overlay/98c9b13d44a798397d48bf13e9b5bf05bc20f8db726eb45fe3a04eecc1c30cd1/merged major:0 minor:77 fsType:overlay blockSize:0} overlay_0-781:{mountpoint:/var/lib/containers/storage/overlay/960aa72776ab9dbbbfb72863fdc87302bf4a5940328a56f4f4dcda4ee49114ee/merged major:0 minor:781 fsType:overlay blockSize:0} overlay_0-786:{mountpoint:/var/lib/containers/storage/overlay/b65100cf6f6429580b54ef51449e6858f818f769b3dd6992ee3a557de35ce768/merged major:0 minor:786 fsType:overlay blockSize:0} overlay_0-789:{mountpoint:/var/lib/containers/storage/overlay/ab04179fe207d12ff02de758445b06a0d5279cb707e97152d51f54850f6b1081/merged major:0 minor:789 fsType:overlay blockSize:0} overlay_0-800:{mountpoint:/var/lib/containers/storage/overlay/1db0439de635f43f6b4e51afa713ecad1983d2c13ff61b757651217be4dc0961/merged major:0 minor:800 fsType:overlay blockSize:0} overlay_0-804:{mountpoint:/var/lib/containers/storage/overlay/79569ce80d4f1dc85a29875d50c3fc5e0796cad962e13ec181bff421e7671ec4/merged major:0 minor:804 fsType:overlay blockSize:0}
overlay_0-81:{mountpoint:/var/lib/containers/storage/overlay/d907381962fc0a24dd981250462ada5b832fc9d9eabf871f07eebb2c6adf551b/merged major:0 minor:81 fsType:overlay blockSize:0} overlay_0-816:{mountpoint:/var/lib/containers/storage/overlay/5758c71fde550aad148a66acb777217de47fdacb8b918bf6381fa9daaf90cf06/merged major:0 minor:816 fsType:overlay blockSize:0} overlay_0-843:{mountpoint:/var/lib/containers/storage/overlay/b0ce04f3a55a515121a14831142d04a99b8222f9aad1ad26da9f60e6a6d0fc28/merged major:0 minor:843 fsType:overlay blockSize:0} overlay_0-847:{mountpoint:/var/lib/containers/storage/overlay/d037c0eaa3a1f8dd016f3947c071ceb8000cb01d2765cefd833e83fa92357c97/merged major:0 minor:847 fsType:overlay blockSize:0} overlay_0-853:{mountpoint:/var/lib/containers/storage/overlay/468a2af4f7b2fb75149cf79198b5d515582aa16bef25e8ab291915f5c811b292/merged major:0 minor:853 fsType:overlay blockSize:0} overlay_0-855:{mountpoint:/var/lib/containers/storage/overlay/267b76180b415c7033dbf0a42db401d94719dd6909f38f0777683dbc13fed7fc/merged major:0 minor:855 fsType:overlay blockSize:0} overlay_0-857:{mountpoint:/var/lib/containers/storage/overlay/54e24bf4e5fe49e2068aa971b8bf67cccb186c5008f9606f36a892518e1aea65/merged major:0 minor:857 fsType:overlay blockSize:0} overlay_0-860:{mountpoint:/var/lib/containers/storage/overlay/6fe68ab2c0dfe30cfab95d11e4f98da81af39507ed198fafeb52ebd93156f879/merged major:0 minor:860 fsType:overlay blockSize:0} overlay_0-861:{mountpoint:/var/lib/containers/storage/overlay/104b74ce2ebc1411b7eb125a1d6113001d15ba8928941bda58c5f3273ed4a6a8/merged major:0 minor:861 fsType:overlay blockSize:0} overlay_0-862:{mountpoint:/var/lib/containers/storage/overlay/b31416971ac9a5cd1309836cd6bf9cfa7408840d2285b63726cf51151cf40bfb/merged major:0 minor:862 fsType:overlay blockSize:0} overlay_0-877:{mountpoint:/var/lib/containers/storage/overlay/f32fd702fd9a63afab14d1dbfeaea5650ca8862e7a9541cdf63721b82d2b6f50/merged major:0 minor:877 fsType:overlay blockSize:0} overlay_0-882:{mountpoint:/var/lib/containers/storage/overlay/2fa5ac8e8bdec449732df0e09af7b04a0f37690cefd61406c70b248bb7dda97d/merged major:0 minor:882 fsType:overlay blockSize:0} overlay_0-884:{mountpoint:/var/lib/containers/storage/overlay/9f69c644079061c45107f542d93791bd0618fe573aff2d6143e1b0d5346dcc9b/merged major:0 minor:884 fsType:overlay blockSize:0} overlay_0-889:{mountpoint:/var/lib/containers/storage/overlay/c86b5f1606fedd71ad4a11097a5843d099d090afd2a2657e433db662f81faf6a/merged major:0 minor:889 fsType:overlay blockSize:0} overlay_0-891:{mountpoint:/var/lib/containers/storage/overlay/c5b46284db82e0eeffeb853cce828af5eef75aa87e72f0e2525c932465201a4d/merged major:0 minor:891 fsType:overlay blockSize:0} overlay_0-902:{mountpoint:/var/lib/containers/storage/overlay/f06101b1f3a6cd36004d3b1bd2d43a31cf9b240e9eee3ee5300ab041002aa6f8/merged major:0 minor:902 fsType:overlay blockSize:0} overlay_0-905:{mountpoint:/var/lib/containers/storage/overlay/73c8d95dc263ebbf190de256df392473e18676a5294e2e6123ba551318e29e91/merged major:0 minor:905 fsType:overlay blockSize:0} overlay_0-908:{mountpoint:/var/lib/containers/storage/overlay/b3e715c938c1125e42944e1aa88114dac18a43e428986742d08f6717fcc4a186/merged major:0 minor:908 fsType:overlay blockSize:0} overlay_0-91:{mountpoint:/var/lib/containers/storage/overlay/4ce2dc2e039e436e7754efdace92f1146ee21dac79083db89d4c19a37088aa73/merged major:0 minor:91 fsType:overlay blockSize:0}
overlay_0-912:{mountpoint:/var/lib/containers/storage/overlay/0dab30a9c19e816477af6bf9e017d1147bfe7e8df4e67cfe178de9b4afab4775/merged major:0 minor:912 fsType:overlay blockSize:0} overlay_0-92:{mountpoint:/var/lib/containers/storage/overlay/53109beb93f8a1d583c24034d0bdc3d3668891b735861d1820e32ddf222df868/merged major:0 minor:92 fsType:overlay blockSize:0} overlay_0-926:{mountpoint:/var/lib/containers/storage/overlay/a6d5f80d93683d24b9dc58b02ba4b6db6e848d4502a58994060cacb57ea5b1e4/merged major:0 minor:926 fsType:overlay blockSize:0} overlay_0-935:{mountpoint:/var/lib/containers/storage/overlay/fb8f2ccffc3de57ff9802f82923c62988d6e9dd58119a9f0a77b7285315d06f1/merged major:0 minor:935 fsType:overlay blockSize:0} overlay_0-949:{mountpoint:/var/lib/containers/storage/overlay/22fa4b0a721c24ccfe9be9bd10756aa7ca2ad001760d0007b3f82626af4ed24a/merged major:0 minor:949 fsType:overlay blockSize:0} overlay_0-951:{mountpoint:/var/lib/containers/storage/overlay/92c1884141f664b92f67c56e859976403736c8a6a5c164290863ad83f8b06b21/merged major:0 minor:951 fsType:overlay blockSize:0} overlay_0-953:{mountpoint:/var/lib/containers/storage/overlay/a11a8c9fc98ead7adc449b0b6b683aab99675f5707875dd6f56dfdb62d951dd3/merged major:0 minor:953 fsType:overlay blockSize:0} overlay_0-955:{mountpoint:/var/lib/containers/storage/overlay/36129e3a4d4cd32ca386e937b9a56eaac7a16881af93831db306cbf27a75a919/merged major:0 minor:955 fsType:overlay blockSize:0} overlay_0-963:{mountpoint:/var/lib/containers/storage/overlay/8164ce74b0830b2829492a109adf3c8e9ee42b6390fcaaeba4ff8971d24f2ed6/merged major:0 minor:963 fsType:overlay blockSize:0} overlay_0-980:{mountpoint:/var/lib/containers/storage/overlay/4002cdeb1d428fd91af219384e9d47852b76c11c402d3f3d1aefdcd43c3e4034/merged major:0 minor:980 fsType:overlay blockSize:0} overlay_0-984:{mountpoint:/var/lib/containers/storage/overlay/74388dcfb29acf31e6144ef1b6dc36c1725919391f0778277ba898a1024dc0c5/merged major:0 minor:984 fsType:overlay blockSize:0}] Mar 08 22:13:50.897964 master-0 kubenswrapper[29458]: I0308 22:13:50.893273 29458 manager.go:217] Machine: {Timestamp:2026-03-08 22:13:50.892536937 +0000 UTC m=+0.180594549 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:60bd3117f077456eaef79571349311b3 SystemUUID:60bd3117-f077-456e-aef7-9571349311b3 BootID:6ad049a3-699b-4e1d-9b55-0bbdfa29d597 Filesystems:[{Device:overlay_0-612 DeviceMajor:0 DeviceMinor:612 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-363 DeviceMajor:0 DeviceMinor:363 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1084 DeviceMajor:0 DeviceMinor:1084 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/67cd73a40904f0f9ea787ff881d2a840cf10744bf89845b00e5d994f7ee5b67d/userdata/shm DeviceMajor:0 DeviceMinor:1182 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/362c3b514579828187f546dc53101831b423b473ee9512ccb87f2423ae6040c3/userdata/shm DeviceMajor:0 DeviceMinor:258 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:462 Capacity:32475529216 Type:vfs 
Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:544 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1028 DeviceMajor:0 DeviceMinor:1028 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1200 DeviceMajor:0 DeviceMinor:1200 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4eec590b-c536-4b16-a664-81bc3c74eef5/volumes/kubernetes.io~projected/kube-api-access-k67bc DeviceMajor:0 DeviceMinor:308 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a3c825039f429bbbe3e7e27ef1491ff9c435ad7f4d68ed1d1f7b0b138f9a2544/userdata/shm DeviceMajor:0 DeviceMinor:839 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/44048b3590f244e6e1938c80ea9293e108819fbabf668d1d67a4241c09d483ab/userdata/shm DeviceMajor:0 DeviceMinor:841 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-605 DeviceMajor:0 DeviceMinor:605 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f6fbc12f-3c27-4a7a-933f-43a55c960335/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:234 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/65b211739156dcea6c9fedd48dbe1e6cb8361762b8f9a787cf0192fa0b5059a7/userdata/shm DeviceMajor:0 DeviceMinor:459 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d3f24d18018ae4fd0cde9a9605ef8a24287eac4d74c241af3ae19429f61d0495/userdata/shm DeviceMajor:0 DeviceMinor:716 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-800 DeviceMajor:0 DeviceMinor:800 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4b5246dc-b715-4678-a3a9-878df57dd236/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:961 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dcce2795ffc43a6cd86e6b9ec76eb643d8b1c22dbdc50b3b5ab3767ff2108c08/userdata/shm DeviceMajor:0 DeviceMinor:1078 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/971ffa86-4d52-4dc3-ba28-03d116ec3494/volumes/kubernetes.io~projected/kube-api-access-7z7fx DeviceMajor:0 DeviceMinor:226 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dc168342b2accc24dd805b536a42a0f0ef9ceaae1895f17c33c4e06a0c3e9184/userdata/shm DeviceMajor:0 DeviceMinor:256 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2395900a-ff6b-46ff-92c6-a8a1b5675b67/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:562 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-889 DeviceMajor:0 DeviceMinor:889 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8a7e92d4-b7ed-408b-b7cf-00209a627bea/volumes/kubernetes.io~projected/kube-api-access-qdz7m DeviceMajor:0 DeviceMinor:1032 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-314 DeviceMajor:0 DeviceMinor:314 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/df48e7e0-0659-48e2-9b6a-32c964ff47b2/volumes/kubernetes.io~projected/kube-api-access-4dr4p DeviceMajor:0 DeviceMinor:247 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1d036d34fc0a96523a8a522c774101e6f8bb0dc6fc53b1cd8cbadc061d7fc1f7/userdata/shm DeviceMajor:0 DeviceMinor:834 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-319 DeviceMajor:0 DeviceMinor:319 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/503b7b6ea77465c9cbfc84fe62fda0b7b8ad6a8d2fd54128890065de069b7f20/userdata/shm DeviceMajor:0 DeviceMinor:245 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2f7507c2d466367da3bbc24168dc98c7fc99ef0ee4b7823db51ec09616db7efe/userdata/shm DeviceMajor:0 DeviceMinor:281 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-147 DeviceMajor:0 DeviceMinor:147 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a913c639-ebfc-42a3-85cd-8a460027d3ec/volumes/kubernetes.io~secret/image-registry-operator-tls DeviceMajor:0 DeviceMinor:503 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-718 DeviceMajor:0 DeviceMinor:718 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f7e80d6737a7317d9e7f0a0998357862025d52425ce316b9131469a8ee87029a/userdata/shm DeviceMajor:0 DeviceMinor:972 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d589bfbb-3a7d-4617-9770-5c9ef737cb4a/volumes/kubernetes.io~projected/kube-api-access-9l82d DeviceMajor:0 DeviceMinor:1129 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert DeviceMajor:0 DeviceMinor:863 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a5f486dd57f083148217b384b5e4b7e4ee2cd439fe07291b198c3cd32fbe85ef/userdata/shm DeviceMajor:0 DeviceMinor:726 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-853 DeviceMajor:0 DeviceMinor:853 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:881 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1097 DeviceMajor:0 DeviceMinor:1097 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0cb21214-292a-48ee-85e2-6b1e62f40cb4/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:670 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a913c639-ebfc-42a3-85cd-8a460027d3ec/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:239 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-738 DeviceMajor:0 DeviceMinor:738 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-488 DeviceMajor:0 
DeviceMinor:488 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-360 DeviceMajor:0 DeviceMinor:360 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2395900a-ff6b-46ff-92c6-a8a1b5675b67/volumes/kubernetes.io~projected/kube-api-access-7v6dc DeviceMajor:0 DeviceMinor:563 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-908 DeviceMajor:0 DeviceMinor:908 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1046 DeviceMajor:0 DeviceMinor:1046 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1179 DeviceMajor:0 DeviceMinor:1179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-816 DeviceMajor:0 DeviceMinor:816 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a21e2296-10cb-4c70-ac3e-2173d35faac4/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e1a74bb495c9d9aab308272824975d3fa3476be254ef7c02bd62f9151f2ab266/userdata/shm DeviceMajor:0 DeviceMinor:237 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c3af41e9-c604-48da-bec5-df81c2ef3374/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1070 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/270111bd9a880fa859abff7a300a5a42546d0f86314f375208a892a811a648e7/userdata/shm DeviceMajor:0 DeviceMinor:44 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/de89c423-0f2a-440f-9fa9-92fefea84b09/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:242 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d06c21917a01888be55a284a4198557df93616f6e6b788240f364df6bfb82d3a/userdata/shm DeviceMajor:0 DeviceMinor:506 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-560 DeviceMajor:0 DeviceMinor:560 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5cccef451dfdb5aa39f76464f49bfb358a48c497dc23473415516a86865fc62f/userdata/shm DeviceMajor:0 DeviceMinor:46 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2a91f36f-900e-4b99-9be1-dfc61d8e31d9/volumes/kubernetes.io~secret/catalogserver-certs DeviceMajor:0 DeviceMinor:690 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-699 DeviceMajor:0 DeviceMinor:699 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b606b54eb942579ee14be5af80441dce4b4a9b6234020bb3e61d0131e1fde21b/userdata/shm DeviceMajor:0 DeviceMinor:240 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1038 DeviceMajor:0 DeviceMinor:1038 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4b5246dc-b715-4678-a3a9-878df57dd236/volumes/kubernetes.io~projected/kube-api-access-hq7xb DeviceMajor:0 DeviceMinor:962 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/385e69e4-d443-44bb-8ee4-578a1c902c62/volumes/kubernetes.io~projected/kube-api-access-vxg7t DeviceMajor:0 DeviceMinor:105 Capacity:32475529216 
Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-713 DeviceMajor:0 DeviceMinor:713 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d589bfbb-3a7d-4617-9770-5c9ef737cb4a/volumes/kubernetes.io~secret/secret-metrics-server-tls DeviceMajor:0 DeviceMinor:1127 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ecb3134a-ff4f-4069-8817-010b400296f6/volumes/kubernetes.io~secret/telemeter-client-tls DeviceMajor:0 DeviceMinor:1142 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7e0267ba-5dd7-4e81-885f-95b27a7b42ea/volumes/kubernetes.io~projected/kube-api-access-jjt52 DeviceMajor:0 DeviceMinor:267 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-768 DeviceMajor:0 DeviceMinor:768 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-103 DeviceMajor:0 DeviceMinor:103 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d0641333-feda-44c5-baf5-ceee4ce3fd8f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:216 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a913c639-ebfc-42a3-85cd-8a460027d3ec/volumes/kubernetes.io~projected/kube-api-access-drcp8 DeviceMajor:0 DeviceMinor:252 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/de7e09860c85ea273caa21fdbfda6d2e559117a5f7a6df3707305d264e29d687/userdata/shm DeviceMajor:0 DeviceMinor:510 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/83b5f0b6-adee-4820-8212-b4d182b178d2/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:545 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3e38e989-41b8-4c80-99fb-8d414dda5da1/volumes/kubernetes.io~projected/kube-api-access-jp86m DeviceMajor:0 DeviceMinor:802 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-550 DeviceMajor:0 DeviceMinor:550 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-949 DeviceMajor:0 DeviceMinor:949 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c3af41e9-c604-48da-bec5-df81c2ef3374/volumes/kubernetes.io~secret/kube-state-metrics-tls DeviceMajor:0 DeviceMinor:1069 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/04fb7bdb-fb5a-4187-94a3-67c8f09684ed/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:220 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/be431b74-1116-4b0f-8b25-bbb0408411b0/volumes/kubernetes.io~projected/kube-api-access-tv57k DeviceMajor:0 DeviceMinor:227 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-721 DeviceMajor:0 DeviceMinor:721 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9d44f96a87d3e5a63998ef47058bf56c18f9a51e485b6d530baa6ae3a9c72e79/userdata/shm DeviceMajor:0 DeviceMinor:871 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-727 DeviceMajor:0 DeviceMinor:727 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1132 DeviceMajor:0 DeviceMinor:1132 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-324 DeviceMajor:0 DeviceMinor:324 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/03e24173b288bd97ec848e0cf7a888e3b1e752701cc2a0adfe31f0bbf45fd669/userdata/shm DeviceMajor:0 DeviceMinor:446 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e8ef68b9-6f8d-4697-b269-91ee4e310752/volumes/kubernetes.io~projected/kube-api-access-6ht4t DeviceMajor:0 DeviceMinor:456 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9d2b94760fb5bd6c1ac833545141ede88958ba2ac4b1af0ff830a401107ab2f9/userdata/shm DeviceMajor:0 DeviceMinor:511 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/088eecd9-a153-4fe0-af5a-78f5bdc0eb6b/volumes/kubernetes.io~projected/kube-api-access-w5t9m DeviceMajor:0 DeviceMinor:867 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:654 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-530 DeviceMajor:0 DeviceMinor:530 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-593 DeviceMajor:0 DeviceMinor:593 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-710 DeviceMajor:0 DeviceMinor:710 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-708 DeviceMajor:0 DeviceMinor:708 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d9fe466f-5a23-4f69-8a96-44bd5d6194f5/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:904 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/409ed7dd551984c65c75de609cd08ca919d308e8d542269375ed00b6340ac461/userdata/shm DeviceMajor:0 DeviceMinor:445 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1101 DeviceMajor:0 DeviceMinor:1101 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-279 DeviceMajor:0 DeviceMinor:279 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/669ef8c8-8a32-4ebd-acc4-e8b2b45286a0/volumes/kubernetes.io~projected/kube-api-access-jb2lv DeviceMajor:0 DeviceMinor:673 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1012 DeviceMajor:0 DeviceMinor:1012 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1184 DeviceMajor:0 DeviceMinor:1184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1040 DeviceMajor:0 DeviceMinor:1040 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/49a678c1404278a258bd5f7da531aa1c8094425dc0f885e61d43b5bf65b98923/userdata/shm DeviceMajor:0 DeviceMinor:606 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3ad163e6ddc790c3a3e14754fccc71ed19c06b28b075ab51e8c743f3e036d876/userdata/shm DeviceMajor:0 DeviceMinor:59 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-754 DeviceMajor:0 DeviceMinor:754 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1ef14467-bb62-462d-9dec-dee43e4cc9bd/volumes/kubernetes.io~projected/kube-api-access-6tfdv DeviceMajor:0 DeviceMinor:648 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-159 DeviceMajor:0 DeviceMinor:159 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-358 DeviceMajor:0 DeviceMinor:358 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-277 DeviceMajor:0 DeviceMinor:277 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2851c096-f5cb-4a46-a5a0-ac0b1341033b/volumes/kubernetes.io~secret/node-tuning-operator-tls DeviceMajor:0 DeviceMinor:444 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-877 DeviceMajor:0 DeviceMinor:877 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/dfe625a1-5ba4-491f-9ab3-5d91154961a0/volumes/kubernetes.io~projected/kube-api-access-j9c64 DeviceMajor:0 DeviceMinor:138 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0de0dd88c4bba9f852c91550e6622cdfe9b4a30a405c23edc2a915817b573fec/userdata/shm DeviceMajor:0 DeviceMinor:512 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-161 DeviceMajor:0 DeviceMinor:161 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-309 DeviceMajor:0 DeviceMinor:309 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d9e9c931-9595-42f1-bbc2-c412286f6cd1/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:880 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-980 DeviceMajor:0 DeviceMinor:980 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/96a67acb-9cc6-4793-b99a-01479b239d76/volumes/kubernetes.io~projected/kube-api-access-d9xj9 DeviceMajor:0 DeviceMinor:118 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-548 DeviceMajor:0 DeviceMinor:548 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-682 DeviceMajor:0 DeviceMinor:682 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-707 DeviceMajor:0 DeviceMinor:707 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fd9abe2b-f829-4376-9abe-7da0a08770e7/volumes/kubernetes.io~secret/samples-operator-tls DeviceMajor:0 DeviceMinor:865 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-953 DeviceMajor:0 DeviceMinor:953 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-450 DeviceMajor:0 DeviceMinor:450 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-614 DeviceMajor:0 DeviceMinor:614 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e5a5d91cfd17574435ef488a30976925f613e8868e1af9e7f86a003675b330e2/userdata/shm DeviceMajor:0 DeviceMinor:263 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/b3c99d21b340bbb5b5d81e3b9c44c2f6826d5e892f5141960667fbe827f38f5e/userdata/shm DeviceMajor:0 DeviceMinor:676 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-316 DeviceMajor:0 DeviceMinor:316 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d2fca6e62ae89a98bc2678ca1c4514d3b2efd7621615252b3640dae5aca8db7e/userdata/shm DeviceMajor:0 DeviceMinor:478 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/be431b74-1116-4b0f-8b25-bbb0408411b0/volumes/kubernetes.io~secret/package-server-manager-serving-cert DeviceMajor:0 DeviceMinor:543 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3e38e989-41b8-4c80-99fb-8d414dda5da1/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:495 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e457f58882ed9a2cc2bdb7c9bf8dd928c9031f07753ed065fd3a502525f26699/userdata/shm DeviceMajor:0 DeviceMinor:1167 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-860 DeviceMajor:0 DeviceMinor:860 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5acb1dbbaadd24be1aa51015d4ffabe0583806b310c9bb173c49c064dc0af3d3/userdata/shm DeviceMajor:0 DeviceMinor:477 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:803 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b6bc6f78-2c5c-4add-925f-f6568a49c2cc/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:973 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1080 DeviceMajor:0 DeviceMinor:1080 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ecb3134a-ff4f-4069-8817-010b400296f6/volumes/kubernetes.io~secret/federate-client-tls DeviceMajor:0 DeviceMinor:1141 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-374 DeviceMajor:0 DeviceMinor:374 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c29732d6a1771e6db51e932e553e6dcc162c74870f5049944a74cde5e36091d0/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/83b5f0b6-adee-4820-8212-b4d182b178d2/volumes/kubernetes.io~projected/kube-api-access-5pwq4 DeviceMajor:0 DeviceMinor:236 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4ef806a4-5486-43a9-8bfa-b1670c888dc1/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls DeviceMajor:0 DeviceMinor:466 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/081acedd-4c88-461f-80f3-e80fdbadb725/volumes/kubernetes.io~projected/kube-api-access-cpxls DeviceMajor:0 DeviceMinor:125 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:442 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 
DeviceMinor:251 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-684 DeviceMajor:0 DeviceMinor:684 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e/volumes/kubernetes.io~projected/kube-api-access-lhp8w DeviceMajor:0 DeviceMinor:833 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/81f5ed55-225c-41e2-bc9d-b41063a604c9/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:992 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-529 DeviceMajor:0 DeviceMinor:529 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-410 DeviceMajor:0 DeviceMinor:410 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1113 DeviceMajor:0 DeviceMinor:1113 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/dfe625a1-5ba4-491f-9ab3-5d91154961a0/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:139 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/077643a2-ab2d-4f12-9abf-42a1c56d7aff/volumes/kubernetes.io~projected/kube-api-access-mp26r DeviceMajor:0 DeviceMinor:692 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:848 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-41 DeviceMajor:0 DeviceMinor:41 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1044 DeviceMajor:0 DeviceMinor:1044 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-884 DeviceMajor:0 DeviceMinor:884 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1086 DeviceMajor:0 DeviceMinor:1086 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-322 DeviceMajor:0 DeviceMinor:322 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0d851f97-b21e-432e-a4c3-dc0a8ff00e84/volumes/kubernetes.io~projected/kube-api-access-7tlmx DeviceMajor:0 DeviceMinor:230 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-984 DeviceMajor:0 DeviceMinor:984 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1099 DeviceMajor:0 DeviceMinor:1099 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-736 DeviceMajor:0 DeviceMinor:736 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/556cd17b0dd9a0437b38f51d3f691ed442f4e900ac26991a4d6a0e87a7a93e20/userdata/shm DeviceMajor:0 DeviceMinor:573 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/89619d97-2c16-4e76-ba80-8b519f6a9366/volumes/kubernetes.io~projected/kube-api-access-zj5rx DeviceMajor:0 DeviceMinor:653 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1006 DeviceMajor:0 DeviceMinor:1006 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-432 DeviceMajor:0 DeviceMinor:432 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d14eb63d678bcf527293b2268e60d6e7c54629d3617ad205aa85e0b95e38c0c8/userdata/shm DeviceMajor:0 DeviceMinor:507 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-761 DeviceMajor:0 DeviceMinor:761 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b358dcb7-d01f-4206-b636-b55a599a73bd/volumes/kubernetes.io~projected/kube-api-access-bmdmr DeviceMajor:0 DeviceMinor:270 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/10e2e81b-cd18-4e30-b8ad-4cf105daea4a/volumes/kubernetes.io~projected/kube-api-access-sjndf DeviceMajor:0 DeviceMinor:1004 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-69 DeviceMajor:0 DeviceMinor:69 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2e34987c76ae3161515e58a685409125bb3c2f2c0b1e13425d28a3f54cc0d97c/userdata/shm DeviceMajor:0 DeviceMinor:947 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c377685c-2024-4ef7-932d-5858eeb0d9bd/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1065 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f08d60c032a49069a33366a771add75613c8b164c10de5edc94cf407f1fce2c7/userdata/shm DeviceMajor:0 DeviceMinor:868 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1186 DeviceMajor:0 DeviceMinor:1186 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-303 DeviceMajor:0 DeviceMinor:303 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-166 DeviceMajor:0 DeviceMinor:166 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d0641333-feda-44c5-baf5-ceee4ce3fd8f/volumes/kubernetes.io~projected/kube-api-access-784c7 DeviceMajor:0 DeviceMinor:243 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5e6100d027b85834b0f36e6902f07cf9a882faac96d2f9348fa6d8cef4d4f07c/userdata/shm DeviceMajor:0 DeviceMinor:447 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-589 DeviceMajor:0 DeviceMinor:589 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3bc807693a5d4854df8f60d3cc1c2f6bf083291e98e017340995c3d3b0e2bf81/userdata/shm DeviceMajor:0 DeviceMinor:334 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ecb3134a-ff4f-4069-8817-010b400296f6/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:591 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a8e00c74-fb72-4e3d-a22c-c38a4772a813/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:215 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-295 DeviceMajor:0 DeviceMinor:295 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ec5f0a537ae65684298a1a4ad3696c2f1fea1eefa39c8057ddfd9d3609fd93bf/userdata/shm DeviceMajor:0 DeviceMinor:978 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-393 DeviceMajor:0 DeviceMinor:393 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-100 DeviceMajor:0 DeviceMinor:100 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3df8fc2c08893e29b5ce6cbd652644e9b2d19ac599e4617011853ce2cd739da7/userdata/shm DeviceMajor:0 DeviceMinor:114 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-435 DeviceMajor:0 DeviceMinor:435 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2851c096-f5cb-4a46-a5a0-ac0b1341033b/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:470 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-546 DeviceMajor:0 DeviceMinor:546 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-656 DeviceMajor:0 DeviceMinor:656 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-634 DeviceMajor:0 DeviceMinor:634 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-343 DeviceMajor:0 DeviceMinor:343 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/427fdbe110b0876dd13174b0756ac4196ec70da6181541067d85f985ac05aca4/userdata/shm DeviceMajor:0 DeviceMinor:260 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-789 DeviceMajor:0 DeviceMinor:789 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c3af41e9-c604-48da-bec5-df81c2ef3374/volumes/kubernetes.io~projected/kube-api-access-z2nfk DeviceMajor:0 DeviceMinor:1071 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1049 DeviceMajor:0 DeviceMinor:1049 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d589bfbb-3a7d-4617-9770-5c9ef737cb4a/volumes/kubernetes.io~secret/client-ca-bundle DeviceMajor:0 DeviceMinor:1128 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/be3668c70e364cf34d2eac1fb81dcded6ee681b77814828fbe8aa2c73f270566/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-422 DeviceMajor:0 DeviceMinor:422 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-555 DeviceMajor:0 DeviceMinor:555 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0/volumes/kubernetes.io~projected/kube-api-access-ff6pm DeviceMajor:0 DeviceMinor:232 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e635b0da-956b-4636-bc9b-61f231241908/volumes/kubernetes.io~secret/tls-certificates DeviceMajor:0 DeviceMinor:1002 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8a7e92d4-b7ed-408b-b7cf-00209a627bea/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1030 Capacity:32475529216 
Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-71 DeviceMajor:0 DeviceMinor:71 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b849f992-1020-4633-98be-75705b962fa9/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:229 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4382d186-34e4-40af-9b92-bb17ddcaa23f/volumes/kubernetes.io~projected/kube-api-access-2hstt DeviceMajor:0 DeviceMinor:233 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/60db7aa4fe5c30fe7cef3df3e7aab11e1bd2cef81e0f4a40f64a350ab51de986/userdata/shm DeviceMajor:0 DeviceMinor:253 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-723 DeviceMajor:0 DeviceMinor:723 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-587 DeviceMajor:0 DeviceMinor:587 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/39ad18e2cdc22131103d7ee2686ffb12580bbefadb50c1a1863e06df883204d5/userdata/shm DeviceMajor:0 DeviceMinor:249 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-566 DeviceMajor:0 DeviceMinor:566 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-891 DeviceMajor:0 DeviceMinor:891 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-963 DeviceMajor:0 DeviceMinor:963 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-318 DeviceMajor:0 DeviceMinor:318 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6798958131d9b6122a924f582d5cf236ae0ff108ba6efd07ed21d07002d8eda4/userdata/shm DeviceMajor:0 DeviceMinor:268 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-396 DeviceMajor:0 DeviceMinor:396 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e/volumes/kubernetes.io~projected/kube-api-access-l5xq4 DeviceMajor:0 DeviceMinor:404 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-951 DeviceMajor:0 DeviceMinor:951 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d4d01185-e485-4697-92c2-31a044f25d82/volumes/kubernetes.io~projected/kube-api-access-ngf2z DeviceMajor:0 DeviceMinor:223 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-289 DeviceMajor:0 DeviceMinor:289 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-109 DeviceMajor:0 DeviceMinor:109 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-862 DeviceMajor:0 DeviceMinor:862 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/44e67e41-045e-42ef-8f60-6ef15606d6a2/volumes/kubernetes.io~projected/kube-api-access-zl4xt DeviceMajor:0 DeviceMinor:123 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/0269ed52-a753-49aa-9c38-c7aee23cebbd/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:1066 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/kubelet/pods/a8e00c74-fb72-4e3d-a22c-c38a4772a813/volumes/kubernetes.io~projected/kube-api-access-gwqqw DeviceMajor:0 DeviceMinor:235 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-935 DeviceMajor:0 DeviceMinor:935 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:209 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6b55e765e348290b71a16cee0db7116808a6250e19b441558bfccabf4cfbc9d8/userdata/shm DeviceMajor:0 DeviceMinor:570 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/da51940a-a38f-4baf-9c14-b2f1f46b5aed/volumes/kubernetes.io~projected/kube-api-access-clxsk DeviceMajor:0 DeviceMinor:564 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/81f5ed55-225c-41e2-bc9d-b41063a604c9/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:993 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-81 DeviceMajor:0 DeviceMinor:81 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-855 DeviceMajor:0 DeviceMinor:855 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-843 DeviceMajor:0 DeviceMinor:843 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-350 DeviceMajor:0 DeviceMinor:350 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1232f59f-4e6a-46ef-8bec-1bd4e04956ef/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:126 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-273 DeviceMajor:0 DeviceMinor:273 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/44e67e41-045e-42ef-8f60-6ef15606d6a2/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:465 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-926 DeviceMajor:0 DeviceMinor:926 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/128b0bbce1167507413481adcf0cd96d93f47d1c9ffde9e41a211956e1a927c9/userdata/shm DeviceMajor:0 DeviceMinor:1073 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1118 DeviceMajor:0 DeviceMinor:1118 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-742 DeviceMajor:0 DeviceMinor:742 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4b5246dc-b715-4678-a3a9-878df57dd236/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:960 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/d063b330-4180-43de-a248-c573183d96f1/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:970 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:461 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/546b6a60e0c7d74e50a429925cb5072388fd5ebf8c592233957d28ac0705b80e/userdata/shm DeviceMajor:0 DeviceMinor:1003 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/44b935a06c24e92b8520f103f003c519a8f99b22186edfd342cc9323faa0eca5/userdata/shm DeviceMajor:0 DeviceMinor:265 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a5afb146-31d7-4da9-8738-b6c15528233a/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:660 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a5afb146-31d7-4da9-8738-b6c15528233a/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:668 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/00db426a-15d4-4737-a85e-b4cf6362c759/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:1175 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/00db426a-15d4-4737-a85e-b4cf6362c759/volumes/kubernetes.io~projected/kube-api-access-86mrp DeviceMajor:0 DeviceMinor:1181 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-346 DeviceMajor:0 DeviceMinor:346 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-291 DeviceMajor:0 DeviceMinor:291 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d9e9c931-9595-42f1-bbc2-c412286f6cd1/volumes/kubernetes.io~projected/kube-api-access-znqrj DeviceMajor:0 DeviceMinor:887 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c377685c-2024-4ef7-932d-5858eeb0d9bd/volumes/kubernetes.io~secret/openshift-state-metrics-tls DeviceMajor:0 DeviceMinor:1060 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-75 DeviceMajor:0 DeviceMinor:75 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-113 DeviceMajor:0 DeviceMinor:113 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/37bf82cb-adea-46d3-a899-136eb1d1f292/volumes/kubernetes.io~projected/kube-api-access-v6ht7 DeviceMajor:0 DeviceMinor:228 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/940096d4a40b7dc6434a7295ac74e546aac8e0fdcf673fbbc4587227bf159807/userdata/shm DeviceMajor:0 DeviceMinor:674 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2a91f36f-900e-4b99-9be1-dfc61d8e31d9/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:689 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-781 DeviceMajor:0 DeviceMinor:781 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d4d01185-e485-4697-92c2-31a044f25d82/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 
DeviceMinor:219 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2851c096-f5cb-4a46-a5a0-ac0b1341033b/volumes/kubernetes.io~projected/kube-api-access-2l47w DeviceMajor:0 DeviceMinor:244 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d186c173d59660d4939673a18315486c8567701538340aa7cd6b89f06bbf1013/userdata/shm DeviceMajor:0 DeviceMinor:482 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1042 DeviceMajor:0 DeviceMinor:1042 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-757 DeviceMajor:0 DeviceMinor:757 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f6fbc12f-3c27-4a7a-933f-43a55c960335/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:222 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-287 DeviceMajor:0 DeviceMinor:287 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-678 DeviceMajor:0 DeviceMinor:678 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1016 DeviceMajor:0 DeviceMinor:1016 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-391 DeviceMajor:0 DeviceMinor:391 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d589bfbb-3a7d-4617-9770-5c9ef737cb4a/volumes/kubernetes.io~secret/secret-metrics-client-certs DeviceMajor:0 DeviceMinor:1123 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b849f992-1020-4633-98be-75705b962fa9/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:213 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad/volumes/kubernetes.io~projected/kube-api-access-sdfls DeviceMajor:0 DeviceMinor:655 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-766 DeviceMajor:0 DeviceMinor:766 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-331 DeviceMajor:0 DeviceMinor:331 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657/volumes/kubernetes.io~projected/kube-api-access-96gl4 DeviceMajor:0 DeviceMinor:224 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-283 DeviceMajor:0 DeviceMinor:283 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/be53893516c99fbabb0efb0e7767df7d102aeacc1fd8341cd8ee128754131110/userdata/shm DeviceMajor:0 DeviceMinor:479 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/6eb502a1-db10-46ba-b698-461919464fb9/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls DeviceMajor:0 DeviceMinor:822 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7e4394146a2df2b894fc7124d9eec1bf24b8531e0bd0dd7d435898a00dec36d0/userdata/shm DeviceMajor:0 DeviceMinor:112 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-367 DeviceMajor:0 DeviceMinor:367 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0cb21214-292a-48ee-85e2-6b1e62f40cb4/volumes/kubernetes.io~projected/kube-api-access-sg2dp
DeviceMajor:0 DeviceMinor:658 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3115bea19c7db25d70ce89d976323f96371d246725faa8269d586e44afe79c19/userdata/shm DeviceMajor:0 DeviceMinor:893 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/75ac8242dd3ac65ec334d068ab89d656dd2f236cc11b5b2166aad268d407590d/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1014 DeviceMajor:0 DeviceMinor:1014 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/41f9b34125839a0766d5a064b548741e6d8afe1be3f01659bf8e4366efb2cc07/userdata/shm DeviceMajor:0 DeviceMinor:1035 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ecb3134a-ff4f-4069-8817-010b400296f6/volumes/kubernetes.io~projected/kube-api-access-pq2ch DeviceMajor:0 DeviceMinor:1144 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-328 DeviceMajor:0 DeviceMinor:328 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/318c84ebaf730c7c85b63db579f8af63f5545b50f015236d0cbd1a16b9495c4d/userdata/shm DeviceMajor:0 DeviceMinor:97 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-609 DeviceMajor:0 DeviceMinor:609 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-882 DeviceMajor:0 DeviceMinor:882 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-163 DeviceMajor:0 DeviceMinor:163 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1082 DeviceMajor:0 DeviceMinor:1082 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-108 DeviceMajor:0 DeviceMinor:108 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c5f350fe49a4dbfc3234a2ef7026b555f76884632095fc5a87ca7626e176aff9/userdata/shm DeviceMajor:0 DeviceMinor:850 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-160 DeviceMajor:0 DeviceMinor:160 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d063b330-4180-43de-a248-c573183d96f1/volumes/kubernetes.io~projected/kube-api-access-8v2k8 DeviceMajor:0 DeviceMinor:971 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c377685c-2024-4ef7-932d-5858eeb0d9bd/volumes/kubernetes.io~projected/kube-api-access-4z4s4 DeviceMajor:0 DeviceMinor:1067 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5e269b66a082f29b4e767340ca080090685cc35deec8a2ff5b8dffcb5ef07002/userdata/shm DeviceMajor:0 DeviceMinor:382 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-680 DeviceMajor:0 DeviceMinor:680 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4382d186-34e4-40af-9b92-bb17ddcaa23f/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:218 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/df48e7e0-0659-48e2-9b6a-32c964ff47b2/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:504 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-576 DeviceMajor:0 DeviceMinor:576 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a5afb146-31d7-4da9-8738-b6c15528233a/volumes/kubernetes.io~projected/kube-api-access-mvp5b DeviceMajor:0 DeviceMinor:669 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-285 DeviceMajor:0 DeviceMinor:285 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a/volumes/kubernetes.io~projected/kube-api-access-lpb8q DeviceMajor:0 DeviceMinor:490 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-329 DeviceMajor:0 DeviceMinor:329 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c9f54e610a612acd73c7eef641d4a04d687bbce1c7479f0807ca8b7e43cd718d/userdata/shm DeviceMajor:0 DeviceMinor:98 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/600acdcc91505be515a6dc9bb9d4094d13c856320148b4b0d5cc2092598749f2/userdata/shm DeviceMajor:0 DeviceMinor:142 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-424 DeviceMajor:0 DeviceMinor:424 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-912 DeviceMajor:0 DeviceMinor:912 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/da51940a-a38f-4baf-9c14-b2f1f46b5aed/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:524 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:832 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3/volumes/kubernetes.io~projected/kube-api-access-shdtk DeviceMajor:0 DeviceMinor:946 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-67 DeviceMajor:0 DeviceMinor:67 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-271 DeviceMajor:0 DeviceMinor:271 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-485 DeviceMajor:0 DeviceMinor:485 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-364 DeviceMajor:0 DeviceMinor:364 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/971ffa86-4d52-4dc3-ba28-03d116ec3494/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:214 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f9ba7cd773b843371b8f8c24e533c22a9486952b2bc08a7f9b3ad3ee69e3c968/userdata/shm DeviceMajor:0 DeviceMinor:325 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7657ee7fb6569f1c4ef325644eaa107755f9e16754fbff803dce351304de134f/userdata/shm DeviceMajor:0 DeviceMinor:89 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-433 DeviceMajor:0 DeviceMinor:433 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-77 DeviceMajor:0 DeviceMinor:77 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6b1e7aff193baf892eff0d308b2df2d4df9815b9047c9a97600c2e10f5583a8b/userdata/shm 
DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1232f59f-4e6a-46ef-8bec-1bd4e04956ef/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-1134 DeviceMajor:0 DeviceMinor:1134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a5afb146-31d7-4da9-8738-b6c15528233a/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:667 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:702 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8/volumes/kubernetes.io~projected/kube-api-access-dqkp4 DeviceMajor:0 DeviceMinor:864 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e67705a9ff72460926d3738d4c71ca542e923f9e2d5919412750e64a1d0ce8cf/userdata/shm DeviceMajor:0 DeviceMinor:845 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-352 DeviceMajor:0 DeviceMinor:352 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/53b5043fd325310586d0ad90805405242c17d1ce6d248bad4d8308d740dacd52/userdata/shm DeviceMajor:0 DeviceMinor:509 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1232f59f-4e6a-46ef-8bec-1bd4e04956ef/volumes/kubernetes.io~projected/kube-api-access-pcqnj DeviceMajor:0 DeviceMinor:127 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e8ef68b9-6f8d-4697-b269-91ee4e310752/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:455 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0269ed52-a753-49aa-9c38-c7aee23cebbd/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1064 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-392 DeviceMajor:0 DeviceMinor:392 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7e0267ba-5dd7-4e81-885f-95b27a7b42ea/volumes/kubernetes.io~secret/marketplace-operator-metrics DeviceMajor:0 DeviceMinor:464 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-759 DeviceMajor:0 DeviceMinor:759 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-92 DeviceMajor:0 DeviceMinor:92 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-140 DeviceMajor:0 DeviceMinor:140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-553 DeviceMajor:0 DeviceMinor:553 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f/volumes/kubernetes.io~projected/kube-api-access-gxxvr DeviceMajor:0 DeviceMinor:849 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-402 DeviceMajor:0 DeviceMinor:402 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-955 DeviceMajor:0 DeviceMinor:955 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c3c767d6aca988650063d67045483c4316fb23551293f63bcb6227962e14fff7/userdata/shm DeviceMajor:0 DeviceMinor:1008 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-372 DeviceMajor:0 DeviceMinor:372 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/da21a3ee43c3a1cb17c48c1a6eb142ca7aa097c1d4b093b742853ab9c1146ede/userdata/shm DeviceMajor:0 DeviceMinor:1130 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/081acedd-4c88-461f-80f3-e80fdbadb725/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4382d186-34e4-40af-9b92-bb17ddcaa23f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:221 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-804 DeviceMajor:0 DeviceMinor:804 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-640 DeviceMajor:0 DeviceMinor:640 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/volumes/kubernetes.io~projected/kube-api-access-vwdhp DeviceMajor:0 DeviceMinor:255 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/de89c423-0f2a-440f-9fa9-92fefea84b09/volumes/kubernetes.io~projected/kube-api-access-7h4vv DeviceMajor:0 DeviceMinor:259 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-603 DeviceMajor:0 DeviceMinor:603 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d9e9c931-9595-42f1-bbc2-c412286f6cd1/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls DeviceMajor:0 DeviceMinor:885 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/81f5ed55-225c-41e2-bc9d-b41063a604c9/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:1000 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1092 DeviceMajor:0 DeviceMinor:1092 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2a91f36f-900e-4b99-9be1-dfc61d8e31d9/volumes/kubernetes.io~projected/kube-api-access-ftn6p DeviceMajor:0 DeviceMinor:691 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-905 DeviceMajor:0 DeviceMinor:905 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6a34c2634ae54a66cec214aefe9bf2e49ebc56d1b92acdc88a8676a1ce3196bd/userdata/shm DeviceMajor:0 DeviceMinor:275 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-457 DeviceMajor:0 DeviceMinor:457 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-74 DeviceMajor:0 DeviceMinor:74 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-131 DeviceMajor:0 DeviceMinor:131 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4ef806a4-5486-43a9-8bfa-b1670c888dc1/volumes/kubernetes.io~projected/kube-api-access-qzlpq DeviceMajor:0 DeviceMinor:231 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/c901b468-b8e9-48f8-8050-0d54e24e2adb/volumes/kubernetes.io~projected/kube-api-access-hmfqq DeviceMajor:0 DeviceMinor:443 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:701 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-91 DeviceMajor:0 DeviceMinor:91 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-740 DeviceMajor:0 DeviceMinor:740 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1ef14467-bb62-462d-9dec-dee43e4cc9bd/volumes/kubernetes.io~secret/machine-api-operator-tls DeviceMajor:0 DeviceMinor:623 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-348 DeviceMajor:0 DeviceMinor:348 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/81f5ed55-225c-41e2-bc9d-b41063a604c9/volumes/kubernetes.io~projected/kube-api-access-7kz92 DeviceMajor:0 DeviceMinor:1001 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/077643a2-ab2d-4f12-9abf-42a1c56d7aff/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:693 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f656606ac6df85fac107c39c0c27a0a282ed80a965624e99277db535c27a6047/userdata/shm DeviceMajor:0 DeviceMinor:837 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1075 DeviceMajor:0 DeviceMinor:1075 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/345ca27a-f572-4efa-b0ce-dfa8243becd6/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:379 Capacity:200003584 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/203faae43e1f3878693db94445a48431e9bbc8eef7fc425b238b62ca3a64799d/userdata/shm DeviceMajor:0 DeviceMinor:130 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f3fbcd83-a3e1-4de1-aceb-2692d348e495/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:558 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-786 DeviceMajor:0 DeviceMinor:786 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ecb3134a-ff4f-4069-8817-010b400296f6/volumes/kubernetes.io~secret/secret-telemeter-client DeviceMajor:0 DeviceMinor:1143 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1188 DeviceMajor:0 DeviceMinor:1188 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-661 DeviceMajor:0 DeviceMinor:661 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-902 DeviceMajor:0 DeviceMinor:902 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0269ed52-a753-49aa-9c38-c7aee23cebbd/volumes/kubernetes.io~projected/kube-api-access-8fp4g DeviceMajor:0 DeviceMinor:1068 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:505 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-847 DeviceMajor:0 DeviceMinor:847 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/66e50eed-e3ac-431f-931b-7c4c848c491b/volumes/kubernetes.io~projected/kube-api-access-bjrqj DeviceMajor:0 DeviceMinor:611 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dcc02028369ad7e36bc57efbe75d5305967f85a4b9666ef43d90eeaacc2b3f3e/userdata/shm DeviceMajor:0 DeviceMinor:1072 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-320 DeviceMajor:0 DeviceMinor:320 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a21e2296-10cb-4c70-ac3e-2173d35faac4/volumes/kubernetes.io~projected/kube-api-access-7xcbb DeviceMajor:0 DeviceMinor:95 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1794b122d487b56235f5a9e6effbe7f1e37c18fe47d01e1c40b8a77c4e74da16/userdata/shm DeviceMajor:0 DeviceMinor:407 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f3fbcd83-a3e1-4de1-aceb-2692d348e495/volumes/kubernetes.io~projected/kube-api-access-5jwf9 DeviceMajor:0 DeviceMinor:559 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/44c8fec7b12dde9268d1d824a4d97116a83214d9f8983f61af194a3fa9aecae7/userdata/shm DeviceMajor:0 DeviceMinor:756 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-651 DeviceMajor:0 DeviceMinor:651 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f3fbcd83-a3e1-4de1-aceb-2692d348e495/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:557 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/46be7c8523987b3cf18afb32c173f063834fd54504cd12311bd2eab02b35bc4d/userdata/shm DeviceMajor:0 DeviceMinor:874 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/66e50eed-e3ac-431f-931b-7c4c848c491b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:580 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-475 DeviceMajor:0 DeviceMinor:475 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0d851f97-b21e-432e-a4c3-dc0a8ff00e84/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:217 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/6eb502a1-db10-46ba-b698-461919464fb9/volumes/kubernetes.io~projected/kube-api-access-sjlqz DeviceMajor:0 DeviceMinor:808 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1054 DeviceMajor:0 DeviceMinor:1054 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-338 DeviceMajor:0 DeviceMinor:338 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/04fb7bdb-fb5a-4187-94a3-67c8f09684ed/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:248 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-467 DeviceMajor:0 DeviceMinor:467 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/fd9abe2b-f829-4376-9abe-7da0a08770e7/volumes/kubernetes.io~projected/kube-api-access-vxssr DeviceMajor:0 DeviceMinor:866 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b1207b6b-0517-46eb-9953-737f2bf1040d/volumes/kubernetes.io~projected/kube-api-access-d2lsl DeviceMajor:0 DeviceMinor:326 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-857 DeviceMajor:0 DeviceMinor:857 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1146 DeviceMajor:0 DeviceMinor:1146 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-861 DeviceMajor:0 DeviceMinor:861 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d9fe466f-5a23-4f69-8a96-44bd5d6194f5/volumes/kubernetes.io~projected/kube-api-access-nvmk7 DeviceMajor:0 DeviceMinor:907 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0c50be0fc3f4780032df6f771d4507e5bf45df79f6025c39b105620c89303b83/userdata/shm DeviceMajor:0 DeviceMinor:1010 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1170 DeviceMajor:0 DeviceMinor:1170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0d0feb73-2ef6-4083-81ce-82a1394ce9c4/volumes/kubernetes.io~projected/kube-api-access-jfpt7 DeviceMajor:0 DeviceMinor:437 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5d5dc92efde818d2d1a5f4cbb624b0e37be0ed6b909a72582b68ff8f3ccab573/userdata/shm DeviceMajor:0 DeviceMinor:671 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1760bfc2a8a6cbf8ae227ef4de6bfa43714b1849e66a5382da34146e555ddd0f/userdata/shm DeviceMajor:0 DeviceMinor:481 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b6bc6f78-2c5c-4add-925f-f6568a49c2cc/volumes/kubernetes.io~projected/kube-api-access-c52wj DeviceMajor:0 DeviceMinor:977 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8a7e92d4-b7ed-408b-b7cf-00209a627bea/volumes/kubernetes.io~secret/prometheus-operator-tls DeviceMajor:0 DeviceMinor:1031 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:03e24173b288bd9 MacAddress:0e:2f:2d:4e:97:ba Speed:10000 Mtu:8900} {Name:0c50be0fc3f4780 MacAddress:b6:26:36:d3:1b:e0 Speed:10000 Mtu:8900} {Name:0de0dd88c4bba9f MacAddress:26:6c:75:f1:31:79 Speed:10000 Mtu:8900} {Name:128b0bbce116750 MacAddress:8e:ec:74:a1:53:37 Speed:10000 Mtu:8900} {Name:1760bfc2a8a6cbf MacAddress:be:bd:24:65:7b:70 Speed:10000 Mtu:8900} {Name:1794b122d487b56 MacAddress:d2:35:24:27:89:1a Speed:10000 Mtu:8900} {Name:1d036d34fc0a965 MacAddress:b6:87:0b:d8:ab:22 Speed:10000 Mtu:8900} {Name:3115bea19c7db25 MacAddress:56:b8:4b:1e:4b:d3 Speed:10000 Mtu:8900} {Name:362c3b514579828 MacAddress:52:b0:d2:16:a1:0c Speed:10000 Mtu:8900} 
{Name:39ad18e2cdc2213 MacAddress:26:ff:39:9f:b1:0d Speed:10000 Mtu:8900} {Name:3ad163e6ddc790c MacAddress:be:1d:e6:79:86:29 Speed:10000 Mtu:8900} {Name:3bc807693a5d485 MacAddress:9e:60:b2:2d:c7:c3 Speed:10000 Mtu:8900} {Name:409ed7dd551984c MacAddress:96:1e:5e:b8:35:22 Speed:10000 Mtu:8900} {Name:41f9b34125839a0 MacAddress:e6:71:1d:29:9c:41 Speed:10000 Mtu:8900} {Name:427fdbe110b0876 MacAddress:22:0a:9d:d1:46:0d Speed:10000 Mtu:8900} {Name:44b935a06c24e92 MacAddress:66:9f:b2:60:92:6e Speed:10000 Mtu:8900} {Name:44c8fec7b12dde9 MacAddress:16:2b:d0:c0:12:57 Speed:10000 Mtu:8900} {Name:46be7c8523987b3 MacAddress:6a:b1:aa:38:7a:a5 Speed:10000 Mtu:8900} {Name:49a678c1404278a MacAddress:66:10:66:bb:cd:67 Speed:10000 Mtu:8900} {Name:503b7b6ea77465c MacAddress:5a:69:ab:29:33:7b Speed:10000 Mtu:8900} {Name:53b5043fd325310 MacAddress:d2:a5:4b:0b:84:af Speed:10000 Mtu:8900} {Name:556cd17b0dd9a04 MacAddress:42:df:bc:21:ca:72 Speed:10000 Mtu:8900} {Name:5acb1dbbaadd24b MacAddress:ca:53:f4:13:4e:8f Speed:10000 Mtu:8900} {Name:5d5dc92efde818d MacAddress:92:12:ec:bf:19:66 Speed:10000 Mtu:8900} {Name:5e269b66a082f29 MacAddress:ca:a1:1c:c4:b1:c0 Speed:10000 Mtu:8900} {Name:5e6100d027b8583 MacAddress:96:36:1b:16:d9:0e Speed:10000 Mtu:8900} {Name:60db7aa4fe5c30f MacAddress:ca:d9:b0:b6:c2:17 Speed:10000 Mtu:8900} {Name:65b211739156dce MacAddress:5a:7e:f2:fe:52:53 Speed:10000 Mtu:8900} {Name:6798958131d9b61 MacAddress:da:a6:8d:8a:0b:cd Speed:10000 Mtu:8900} {Name:67cd73a40904f0f MacAddress:a6:08:d9:67:c7:0f Speed:10000 Mtu:8900} {Name:6a34c2634ae54a6 MacAddress:be:e8:e1:6c:2d:df Speed:10000 Mtu:8900} {Name:6b55e765e348290 MacAddress:9e:47:74:fd:dc:ad Speed:10000 Mtu:8900} {Name:75ac8242dd3ac65 MacAddress:7e:23:f0:b4:d4:5b Speed:10000 Mtu:8900} {Name:940096d4a40b7dc MacAddress:06:2f:ba:bb:f3:10 Speed:10000 Mtu:8900} {Name:9d2b94760fb5bd6 MacAddress:d6:d2:93:e8:4d:34 Speed:10000 Mtu:8900} {Name:9d44f96a87d3e5a MacAddress:5e:63:cf:f1:9e:66 Speed:10000 Mtu:8900} {Name:a3c825039f429bb MacAddress:e2:4f:cb:a7:28:c4 Speed:10000 Mtu:8900} {Name:a5f486dd57f0831 MacAddress:42:dd:b7:e2:6d:73 Speed:10000 Mtu:8900} {Name:b606b54eb942579 MacAddress:ea:65:36:f9:3d:34 Speed:10000 Mtu:8900} {Name:be53893516c99fb MacAddress:7a:ee:65:8e:6b:ae Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:72:f3:51:ba:cd:dc Speed:0 Mtu:8900} {Name:c3c767d6aca9886 MacAddress:d6:68:99:c3:43:83 Speed:10000 Mtu:8900} {Name:d06c21917a01888 MacAddress:7e:e3:d8:59:cc:73 Speed:10000 Mtu:8900} {Name:d14eb63d678bcf5 MacAddress:de:49:c4:47:a1:57 Speed:10000 Mtu:8900} {Name:d186c173d59660d MacAddress:b2:d9:8e:ba:bb:2c Speed:10000 Mtu:8900} {Name:d2fca6e62ae89a9 MacAddress:6a:e6:07:8e:47:92 Speed:10000 Mtu:8900} {Name:da21a3ee43c3a1c MacAddress:be:44:59:05:3b:f7 Speed:10000 Mtu:8900} {Name:dc168342b2accc2 MacAddress:6e:99:fb:ce:03:21 Speed:10000 Mtu:8900} {Name:dcce2795ffc43a6 MacAddress:1e:b3:7f:72:f3:38 Speed:10000 Mtu:8900} {Name:de7e09860c85ea2 MacAddress:7a:6e:08:46:ac:4d Speed:10000 Mtu:8900} {Name:e1a74bb495c9d9a MacAddress:56:56:84:0a:d2:ff Speed:10000 Mtu:8900} {Name:e457f58882ed9a2 MacAddress:92:79:df:13:1b:a5 Speed:10000 Mtu:8900} {Name:e5a5d91cfd17574 MacAddress:ca:f4:18:8f:14:44 Speed:10000 Mtu:8900} {Name:e67705a9ff72460 MacAddress:ae:b3:76:d8:8b:52 Speed:10000 Mtu:8900} {Name:ec5f0a537ae6568 MacAddress:22:15:1d:6a:74:39 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:0e:40:5e Speed:-1 Mtu:9000} {Name:f08d60c032a4906 
MacAddress:76:4f:a5:aa:ec:b7 Speed:10000 Mtu:8900} {Name:f656606ac6df85f MacAddress:be:8d:09:68:49:20 Speed:10000 Mtu:8900} {Name:f9ba7cd773b8433 MacAddress:ca:08:9e:16:88:54 Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:52:ad:85:17:24:3e Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 08 22:13:50.898847 master-0 kubenswrapper[29458]: I0308 22:13:50.894571 29458 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Mar 08 22:13:50.898847 master-0 kubenswrapper[29458]: I0308 22:13:50.894647 29458 manager.go:233] Version: {KernelVersion:5.14.0-427.111.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602172219-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 08 22:13:50.898847 master-0 kubenswrapper[29458]: I0308 22:13:50.894885 29458 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 08 22:13:50.898847 master-0 kubenswrapper[29458]: I0308 22:13:50.895026 29458 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 08 22:13:50.898847 master-0 kubenswrapper[29458]: I0308 22:13:50.895060 29458 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 08 22:13:50.898847 master-0 kubenswrapper[29458]: I0308 22:13:50.895310 29458 topology_manager.go:138] "Creating topology manager with none policy" Mar 08 22:13:50.898847 master-0 kubenswrapper[29458]: I0308 22:13:50.895319 29458 container_manager_linux.go:303] "Creating device plugin manager" Mar 08 22:13:50.898847 master-0 kubenswrapper[29458]: I0308 22:13:50.895329 29458 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 08 22:13:50.898847 master-0 kubenswrapper[29458]: I0308 22:13:50.895351 29458 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 08 22:13:50.898847 master-0 kubenswrapper[29458]: I0308 22:13:50.895392 29458 state_mem.go:36] "Initialized new in-memory state store" Mar 08 22:13:50.898847 master-0 kubenswrapper[29458]: I0308 22:13:50.895500 29458 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 08 22:13:50.898847 master-0 kubenswrapper[29458]: I0308 22:13:50.895561 29458 kubelet.go:418] "Attempting to sync node 
with API server" Mar 08 22:13:50.898847 master-0 kubenswrapper[29458]: I0308 22:13:50.895575 29458 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 08 22:13:50.898847 master-0 kubenswrapper[29458]: I0308 22:13:50.895595 29458 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 08 22:13:50.898847 master-0 kubenswrapper[29458]: I0308 22:13:50.895611 29458 kubelet.go:324] "Adding apiserver pod source" Mar 08 22:13:50.898847 master-0 kubenswrapper[29458]: I0308 22:13:50.895631 29458 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 08 22:13:50.898847 master-0 kubenswrapper[29458]: I0308 22:13:50.897777 29458 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 08 22:13:50.898847 master-0 kubenswrapper[29458]: I0308 22:13:50.897900 29458 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Mar 08 22:13:50.902930 master-0 kubenswrapper[29458]: I0308 22:13:50.902096 29458 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 08 22:13:50.902930 master-0 kubenswrapper[29458]: I0308 22:13:50.902236 29458 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 08 22:13:50.902930 master-0 kubenswrapper[29458]: I0308 22:13:50.902253 29458 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 08 22:13:50.902930 master-0 kubenswrapper[29458]: I0308 22:13:50.902268 29458 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 08 22:13:50.902930 master-0 kubenswrapper[29458]: I0308 22:13:50.902276 29458 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 08 22:13:50.902930 master-0 kubenswrapper[29458]: I0308 22:13:50.902283 29458 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 08 22:13:50.902930 master-0 kubenswrapper[29458]: I0308 22:13:50.902290 29458 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 08 22:13:50.902930 master-0 kubenswrapper[29458]: I0308 22:13:50.902298 29458 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 08 22:13:50.902930 master-0 kubenswrapper[29458]: I0308 22:13:50.902305 29458 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 08 22:13:50.902930 master-0 kubenswrapper[29458]: I0308 22:13:50.902314 29458 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 08 22:13:50.902930 master-0 kubenswrapper[29458]: I0308 22:13:50.902322 29458 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 08 22:13:50.902930 master-0 kubenswrapper[29458]: I0308 22:13:50.902333 29458 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 08 22:13:50.902930 master-0 kubenswrapper[29458]: I0308 22:13:50.902361 29458 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 08 22:13:50.902930 master-0 kubenswrapper[29458]: I0308 22:13:50.902407 29458 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 08 22:13:50.903675 master-0 kubenswrapper[29458]: I0308 22:13:50.903340 29458 server.go:1280] "Started kubelet" Mar 08 22:13:50.904279 master-0 systemd[1]: Started Kubernetes Kubelet. 
Mar 08 22:13:50.911872 master-0 kubenswrapper[29458]: I0308 22:13:50.910818 29458 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 08 22:13:50.911872 master-0 kubenswrapper[29458]: I0308 22:13:50.911175 29458 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 08 22:13:50.911872 master-0 kubenswrapper[29458]: I0308 22:13:50.911584 29458 server_v1.go:47] "podresources" method="list" useActivePods=true
Mar 08 22:13:50.913474 master-0 kubenswrapper[29458]: I0308 22:13:50.913318 29458 server.go:449] "Adding debug handlers to kubelet server"
Mar 08 22:13:50.923205 master-0 kubenswrapper[29458]: I0308 22:13:50.918136 29458 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 08 22:13:50.923205 master-0 kubenswrapper[29458]: I0308 22:13:50.920994 29458 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 08 22:13:50.923871 master-0 kubenswrapper[29458]: I0308 22:13:50.923811 29458 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 08 22:13:50.928300 master-0 kubenswrapper[29458]: I0308 22:13:50.927310 29458 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Mar 08 22:13:50.928300 master-0 kubenswrapper[29458]: I0308 22:13:50.927372 29458 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 08 22:13:50.931243 master-0 kubenswrapper[29458]: I0308 22:13:50.930150 29458 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-09 21:47:40 +0000 UTC, rotation deadline is 2026-03-09 16:31:35.681796432 +0000 UTC
Mar 08 22:13:50.931243 master-0 kubenswrapper[29458]: I0308 22:13:50.930361 29458 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 18h17m44.751446247s for next certificate rotation
Mar 08 22:13:50.931243 master-0 kubenswrapper[29458]: I0308 22:13:50.930372 29458 volume_manager.go:287] "The desired_state_of_world populator starts"
Mar 08 22:13:50.931243 master-0 kubenswrapper[29458]: I0308 22:13:50.930409 29458 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 08 22:13:50.931243 master-0 kubenswrapper[29458]: I0308 22:13:50.930790 29458 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Mar 08 22:13:50.933650 master-0 kubenswrapper[29458]: I0308 22:13:50.933573 29458 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 08 22:13:50.934857 master-0 kubenswrapper[29458]: I0308 22:13:50.934781 29458 factory.go:55] Registering systemd factory
Mar 08 22:13:50.934857 master-0 kubenswrapper[29458]: I0308 22:13:50.934810 29458 factory.go:221] Registration of the systemd container factory successfully
Mar 08 22:13:50.936158 master-0 kubenswrapper[29458]: I0308 22:13:50.936115 29458 factory.go:153] Registering CRI-O factory
Mar 08 22:13:50.936158 master-0 kubenswrapper[29458]: I0308 22:13:50.936140 29458 factory.go:221] Registration of the crio container factory successfully
Mar 08 22:13:50.936293 master-0 kubenswrapper[29458]: I0308 22:13:50.936243 29458 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Mar 08 22:13:50.936293 master-0 kubenswrapper[29458]: I0308 22:13:50.936279 29458 factory.go:103] Registering Raw factory
Mar 08 22:13:50.936427 master-0 kubenswrapper[29458]: I0308 22:13:50.936306 29458 manager.go:1196] Started watching for new ooms in manager
Mar 08 22:13:50.942997 master-0 kubenswrapper[29458]: I0308 22:13:50.942947 29458 manager.go:319] Starting recovery of all containers
Mar 08 22:13:50.952156 master-0 kubenswrapper[29458]: E0308 22:13:50.951974 29458 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Mar 08 22:13:50.963310 master-0 kubenswrapper[29458]: I0308 22:13:50.962545 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3c50dd1f-fcbc-412c-a1cc-0738ea4464e0" volumeName="kubernetes.io/projected/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-kube-api-access-ff6pm" seLinuxMountContext=""
Mar 08 22:13:50.963310 master-0 kubenswrapper[29458]: I0308 22:13:50.963103 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4ef806a4-5486-43a9-8bfa-b1670c888dc1" volumeName="kubernetes.io/projected/4ef806a4-5486-43a9-8bfa-b1670c888dc1-kube-api-access-qzlpq" seLinuxMountContext=""
Mar 08 22:13:50.963310 master-0 kubenswrapper[29458]: I0308 22:13:50.963132 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0269ed52-a753-49aa-9c38-c7aee23cebbd" volumeName="kubernetes.io/secret/0269ed52-a753-49aa-9c38-c7aee23cebbd-node-exporter-kube-rbac-proxy-config" seLinuxMountContext=""
Mar 08 22:13:50.963310 master-0 kubenswrapper[29458]: I0308 22:13:50.963146 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="345ca27a-f572-4efa-b0ce-dfa8243becd6" volumeName="kubernetes.io/projected/345ca27a-f572-4efa-b0ce-dfa8243becd6-kube-api-access" seLinuxMountContext=""
Mar 08 22:13:50.963310 master-0 kubenswrapper[29458]: I0308 22:13:50.963163 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a" volumeName="kubernetes.io/configmap/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-trusted-ca-bundle" seLinuxMountContext=""
Mar 08 22:13:50.963310 master-0 kubenswrapper[29458]: I0308 22:13:50.963179 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c377685c-2024-4ef7-932d-5858eeb0d9bd" volumeName="kubernetes.io/secret/c377685c-2024-4ef7-932d-5858eeb0d9bd-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext=""
Mar 08 22:13:50.963310 master-0 kubenswrapper[29458]: I0308 22:13:50.963195 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4382d186-34e4-40af-9b92-bb17ddcaa23f" volumeName="kubernetes.io/configmap/4382d186-34e4-40af-9b92-bb17ddcaa23f-etcd-ca" seLinuxMountContext=""
Mar 08 22:13:50.963310 master-0 kubenswrapper[29458]: I0308 22:13:50.963223 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="83b5f0b6-adee-4820-8212-b4d182b178d2" volumeName="kubernetes.io/projected/83b5f0b6-adee-4820-8212-b4d182b178d2-kube-api-access-5pwq4" seLinuxMountContext=""
Mar 08 22:13:50.963310 master-0 kubenswrapper[29458]: I0308 22:13:50.963272 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a7e92d4-b7ed-408b-b7cf-00209a627bea"
volumeName="kubernetes.io/secret/8a7e92d4-b7ed-408b-b7cf-00209a627bea-prometheus-operator-tls" seLinuxMountContext="" Mar 08 22:13:50.963310 master-0 kubenswrapper[29458]: I0308 22:13:50.963285 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a913c639-ebfc-42a3-85cd-8a460027d3ec" volumeName="kubernetes.io/projected/a913c639-ebfc-42a3-85cd-8a460027d3ec-kube-api-access-drcp8" seLinuxMountContext="" Mar 08 22:13:50.963310 master-0 kubenswrapper[29458]: I0308 22:13:50.963297 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dfe625a1-5ba4-491f-9ab3-5d91154961a0" volumeName="kubernetes.io/configmap/dfe625a1-5ba4-491f-9ab3-5d91154961a0-env-overrides" seLinuxMountContext="" Mar 08 22:13:50.963310 master-0 kubenswrapper[29458]: I0308 22:13:50.963312 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f6fbc12f-3c27-4a7a-933f-43a55c960335" volumeName="kubernetes.io/configmap/f6fbc12f-3c27-4a7a-933f-43a55c960335-config" seLinuxMountContext="" Mar 08 22:13:50.963310 master-0 kubenswrapper[29458]: I0308 22:13:50.963324 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="66e50eed-e3ac-431f-931b-7c4c848c491b" volumeName="kubernetes.io/projected/66e50eed-e3ac-431f-931b-7c4c848c491b-kube-api-access-bjrqj" seLinuxMountContext="" Mar 08 22:13:50.963310 master-0 kubenswrapper[29458]: I0308 22:13:50.963339 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7868a4fb-af89-4bdc-b41b-31f4ee59b5f3" volumeName="kubernetes.io/projected/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3-kube-api-access-shdtk" seLinuxMountContext="" Mar 08 22:13:50.963310 master-0 kubenswrapper[29458]: I0308 22:13:50.963351 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f6fbc12f-3c27-4a7a-933f-43a55c960335" volumeName="kubernetes.io/secret/f6fbc12f-3c27-4a7a-933f-43a55c960335-serving-cert" seLinuxMountContext="" Mar 08 22:13:50.964103 master-0 kubenswrapper[29458]: I0308 22:13:50.963366 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ef14467-bb62-462d-9dec-dee43e4cc9bd" volumeName="kubernetes.io/configmap/1ef14467-bb62-462d-9dec-dee43e4cc9bd-config" seLinuxMountContext="" Mar 08 22:13:50.964103 master-0 kubenswrapper[29458]: I0308 22:13:50.963414 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44e67e41-045e-42ef-8f60-6ef15606d6a2" volumeName="kubernetes.io/projected/44e67e41-045e-42ef-8f60-6ef15606d6a2-kube-api-access-zl4xt" seLinuxMountContext="" Mar 08 22:13:50.964103 master-0 kubenswrapper[29458]: I0308 22:13:50.963427 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e2eb05c-eaa5-4d9b-abad-c0ef6835087e" volumeName="kubernetes.io/secret/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e-apiservice-cert" seLinuxMountContext="" Mar 08 22:13:50.964103 master-0 kubenswrapper[29458]: I0308 22:13:50.963440 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="66e50eed-e3ac-431f-931b-7c4c848c491b" volumeName="kubernetes.io/empty-dir/66e50eed-e3ac-431f-931b-7c4c848c491b-snapshots" seLinuxMountContext="" Mar 08 22:13:50.964103 master-0 kubenswrapper[29458]: I0308 22:13:50.963452 29458 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="89619d97-2c16-4e76-ba80-8b519f6a9366" volumeName="kubernetes.io/empty-dir/89619d97-2c16-4e76-ba80-8b519f6a9366-catalog-content" seLinuxMountContext="" Mar 08 22:13:50.964103 master-0 kubenswrapper[29458]: I0308 22:13:50.963465 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be431b74-1116-4b0f-8b25-bbb0408411b0" volumeName="kubernetes.io/projected/be431b74-1116-4b0f-8b25-bbb0408411b0-kube-api-access-tv57k" seLinuxMountContext="" Mar 08 22:13:50.964103 master-0 kubenswrapper[29458]: I0308 22:13:50.963479 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c228b17c-fd7b-4273-ac03-eac5d4a3a4ad" volumeName="kubernetes.io/secret/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad-cluster-storage-operator-serving-cert" seLinuxMountContext="" Mar 08 22:13:50.964103 master-0 kubenswrapper[29458]: I0308 22:13:50.963654 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9e9c931-9595-42f1-bbc2-c412286f6cd1" volumeName="kubernetes.io/secret/d9e9c931-9595-42f1-bbc2-c412286f6cd1-cluster-baremetal-operator-tls" seLinuxMountContext="" Mar 08 22:13:50.964103 master-0 kubenswrapper[29458]: I0308 22:13:50.963705 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9" volumeName="kubernetes.io/secret/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9-serving-cert" seLinuxMountContext="" Mar 08 22:13:50.964103 master-0 kubenswrapper[29458]: I0308 22:13:50.963718 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2395900a-ff6b-46ff-92c6-a8a1b5675b67" volumeName="kubernetes.io/secret/2395900a-ff6b-46ff-92c6-a8a1b5675b67-serving-cert" seLinuxMountContext="" Mar 08 22:13:50.964103 master-0 kubenswrapper[29458]: I0308 22:13:50.963754 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ecb3134a-ff4f-4069-8817-010b400296f6" volumeName="kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-telemeter-trusted-ca-bundle" seLinuxMountContext="" Mar 08 22:13:50.964103 master-0 kubenswrapper[29458]: I0308 22:13:50.963770 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d851f97-b21e-432e-a4c3-dc0a8ff00e84" volumeName="kubernetes.io/configmap/0d851f97-b21e-432e-a4c3-dc0a8ff00e84-config" seLinuxMountContext="" Mar 08 22:13:50.964103 master-0 kubenswrapper[29458]: I0308 22:13:50.963843 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="971ffa86-4d52-4dc3-ba28-03d116ec3494" volumeName="kubernetes.io/secret/971ffa86-4d52-4dc3-ba28-03d116ec3494-serving-cert" seLinuxMountContext="" Mar 08 22:13:50.964103 master-0 kubenswrapper[29458]: I0308 22:13:50.963860 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3af41e9-c604-48da-bec5-df81c2ef3374" volumeName="kubernetes.io/projected/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-api-access-z2nfk" seLinuxMountContext="" Mar 08 22:13:50.964103 master-0 kubenswrapper[29458]: I0308 22:13:50.963874 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d589bfbb-3a7d-4617-9770-5c9ef737cb4a" 
volumeName="kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-secret-metrics-server-tls" seLinuxMountContext="" Mar 08 22:13:50.964103 master-0 kubenswrapper[29458]: I0308 22:13:50.963886 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8" volumeName="kubernetes.io/configmap/2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8-cco-trusted-ca" seLinuxMountContext="" Mar 08 22:13:50.964103 master-0 kubenswrapper[29458]: I0308 22:13:50.963918 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e2eb05c-eaa5-4d9b-abad-c0ef6835087e" volumeName="kubernetes.io/secret/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e-webhook-cert" seLinuxMountContext="" Mar 08 22:13:50.964103 master-0 kubenswrapper[29458]: I0308 22:13:50.963930 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5afb146-31d7-4da9-8738-b6c15528233a" volumeName="kubernetes.io/configmap/a5afb146-31d7-4da9-8738-b6c15528233a-etcd-serving-ca" seLinuxMountContext="" Mar 08 22:13:50.964103 master-0 kubenswrapper[29458]: I0308 22:13:50.963968 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3af41e9-c604-48da-bec5-df81c2ef3374" volumeName="kubernetes.io/secret/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-tls" seLinuxMountContext="" Mar 08 22:13:50.964103 master-0 kubenswrapper[29458]: I0308 22:13:50.963980 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d063b330-4180-43de-a248-c573183d96f1" volumeName="kubernetes.io/secret/d063b330-4180-43de-a248-c573183d96f1-cloud-controller-manager-operator-tls" seLinuxMountContext="" Mar 08 22:13:50.964103 master-0 kubenswrapper[29458]: I0308 22:13:50.963994 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7e0267ba-5dd7-4e81-885f-95b27a7b42ea" volumeName="kubernetes.io/projected/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-kube-api-access-jjt52" seLinuxMountContext="" Mar 08 22:13:50.964103 master-0 kubenswrapper[29458]: I0308 22:13:50.964017 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a21e2296-10cb-4c70-ac3e-2173d35faac4" volumeName="kubernetes.io/projected/a21e2296-10cb-4c70-ac3e-2173d35faac4-kube-api-access-7xcbb" seLinuxMountContext="" Mar 08 22:13:50.964103 master-0 kubenswrapper[29458]: I0308 22:13:50.964030 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a" volumeName="kubernetes.io/configmap/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-etcd-serving-ca" seLinuxMountContext="" Mar 08 22:13:50.964103 master-0 kubenswrapper[29458]: I0308 22:13:50.964041 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d589bfbb-3a7d-4617-9770-5c9ef737cb4a" volumeName="kubernetes.io/configmap/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.964057 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9" volumeName="kubernetes.io/projected/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9-kube-api-access" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: 
I0308 22:13:50.965050 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1232f59f-4e6a-46ef-8bec-1bd4e04956ef" volumeName="kubernetes.io/projected/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-kube-api-access-pcqnj" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965065 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2a91f36f-900e-4b99-9be1-dfc61d8e31d9" volumeName="kubernetes.io/secret/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-catalogserver-certs" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965130 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c377685c-2024-4ef7-932d-5858eeb0d9bd" volumeName="kubernetes.io/projected/c377685c-2024-4ef7-932d-5858eeb0d9bd-kube-api-access-4z4s4" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965145 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1232f59f-4e6a-46ef-8bec-1bd4e04956ef" volumeName="kubernetes.io/secret/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-ovn-node-metrics-cert" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965157 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ef14467-bb62-462d-9dec-dee43e4cc9bd" volumeName="kubernetes.io/configmap/1ef14467-bb62-462d-9dec-dee43e4cc9bd-images" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965168 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="385e69e4-d443-44bb-8ee4-578a1c902c62" volumeName="kubernetes.io/projected/385e69e4-d443-44bb-8ee4-578a1c902c62-kube-api-access-vxg7t" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965181 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ecb3134a-ff4f-4069-8817-010b400296f6" volumeName="kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-secret-telemeter-client-kube-rbac-proxy-config" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965219 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44e67e41-045e-42ef-8f60-6ef15606d6a2" volumeName="kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965233 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2a91f36f-900e-4b99-9be1-dfc61d8e31d9" volumeName="kubernetes.io/empty-dir/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-cache" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965246 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="669ef8c8-8a32-4ebd-acc4-e8b2b45286a0" volumeName="kubernetes.io/projected/669ef8c8-8a32-4ebd-acc4-e8b2b45286a0-kube-api-access-jb2lv" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965260 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6eb502a1-db10-46ba-b698-461919464fb9" 
volumeName="kubernetes.io/secret/6eb502a1-db10-46ba-b698-461919464fb9-control-plane-machine-set-operator-tls" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965277 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a913c639-ebfc-42a3-85cd-8a460027d3ec" volumeName="kubernetes.io/projected/a913c639-ebfc-42a3-85cd-8a460027d3ec-bound-sa-token" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965296 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a" volumeName="kubernetes.io/configmap/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-audit" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965333 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ecb3134a-ff4f-4069-8817-010b400296f6" volumeName="kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-telemeter-client-tls" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965347 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="088eecd9-a153-4fe0-af5a-78f5bdc0eb6b" volumeName="kubernetes.io/empty-dir/088eecd9-a153-4fe0-af5a-78f5bdc0eb6b-catalog-content" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965375 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d589bfbb-3a7d-4617-9770-5c9ef737cb4a" volumeName="kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-secret-metrics-client-certs" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965419 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04fb7bdb-fb5a-4187-94a3-67c8f09684ed" volumeName="kubernetes.io/configmap/04fb7bdb-fb5a-4187-94a3-67c8f09684ed-config" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965432 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04fb7bdb-fb5a-4187-94a3-67c8f09684ed" volumeName="kubernetes.io/secret/04fb7bdb-fb5a-4187-94a3-67c8f09684ed-serving-cert" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965448 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f1a7900-a0b2-47fc-b43c-a0a5dee6b657" volumeName="kubernetes.io/configmap/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-service-ca-bundle" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965462 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a" volumeName="kubernetes.io/configmap/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-image-import-ca" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965476 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d063b330-4180-43de-a248-c573183d96f1" volumeName="kubernetes.io/projected/d063b330-4180-43de-a248-c573183d96f1-kube-api-access-8v2k8" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965493 29458 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e38e989-41b8-4c80-99fb-8d414dda5da1" volumeName="kubernetes.io/projected/3e38e989-41b8-4c80-99fb-8d414dda5da1-kube-api-access-jp86m" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965506 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a21e2296-10cb-4c70-ac3e-2173d35faac4" volumeName="kubernetes.io/secret/a21e2296-10cb-4c70-ac3e-2173d35faac4-metrics-tls" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965520 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9" volumeName="kubernetes.io/configmap/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9-service-ca" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965533 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="10e2e81b-cd18-4e30-b8ad-4cf105daea4a" volumeName="kubernetes.io/projected/10e2e81b-cd18-4e30-b8ad-4cf105daea4a-kube-api-access-sjndf" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965546 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ef14467-bb62-462d-9dec-dee43e4cc9bd" volumeName="kubernetes.io/projected/1ef14467-bb62-462d-9dec-dee43e4cc9bd-kube-api-access-6tfdv" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965560 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e2eb05c-eaa5-4d9b-abad-c0ef6835087e" volumeName="kubernetes.io/empty-dir/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e-tmpfs" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965573 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96a67acb-9cc6-4793-b99a-01479b239d76" volumeName="kubernetes.io/configmap/96a67acb-9cc6-4793-b99a-01479b239d76-whereabouts-configmap" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965586 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b849f992-1020-4633-98be-75705b962fa9" volumeName="kubernetes.io/configmap/b849f992-1020-4633-98be-75705b962fa9-config" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965622 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fd9abe2b-f829-4376-9abe-7da0a08770e7" volumeName="kubernetes.io/projected/fd9abe2b-f829-4376-9abe-7da0a08770e7-kube-api-access-vxssr" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965635 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="83b5f0b6-adee-4820-8212-b4d182b178d2" volumeName="kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965650 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed" volumeName="kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls" 
seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965666 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2395900a-ff6b-46ff-92c6-a8a1b5675b67" volumeName="kubernetes.io/projected/2395900a-ff6b-46ff-92c6-a8a1b5675b67-kube-api-access-7v6dc" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965679 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f1a7900-a0b2-47fc-b43c-a0a5dee6b657" volumeName="kubernetes.io/projected/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-kube-api-access-96gl4" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965693 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4382d186-34e4-40af-9b92-bb17ddcaa23f" volumeName="kubernetes.io/secret/4382d186-34e4-40af-9b92-bb17ddcaa23f-serving-cert" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965709 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b1207b6b-0517-46eb-9953-737f2bf1040d" volumeName="kubernetes.io/empty-dir/b1207b6b-0517-46eb-9953-737f2bf1040d-utilities" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965722 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3af41e9-c604-48da-bec5-df81c2ef3374" volumeName="kubernetes.io/configmap/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965735 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4cbc6c17-7c16-435f-9399-b6f1ddb6d17f" volumeName="kubernetes.io/configmap/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-config" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965748 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="89619d97-2c16-4e76-ba80-8b519f6a9366" volumeName="kubernetes.io/empty-dir/89619d97-2c16-4e76-ba80-8b519f6a9366-utilities" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965761 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ecb3134a-ff4f-4069-8817-010b400296f6" volumeName="kubernetes.io/projected/ecb3134a-ff4f-4069-8817-010b400296f6-kube-api-access-pq2ch" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965774 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0cb21214-292a-48ee-85e2-6b1e62f40cb4" volumeName="kubernetes.io/secret/0cb21214-292a-48ee-85e2-6b1e62f40cb4-metrics-tls" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965788 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d0641333-feda-44c5-baf5-ceee4ce3fd8f" volumeName="kubernetes.io/empty-dir/d0641333-feda-44c5-baf5-ceee4ce3fd8f-available-featuregates" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965800 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="dfe625a1-5ba4-491f-9ab3-5d91154961a0" volumeName="kubernetes.io/secret/dfe625a1-5ba4-491f-9ab3-5d91154961a0-webhook-cert" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965813 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f1a7900-a0b2-47fc-b43c-a0a5dee6b657" volumeName="kubernetes.io/configmap/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-config" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965828 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fd9abe2b-f829-4376-9abe-7da0a08770e7" volumeName="kubernetes.io/secret/fd9abe2b-f829-4376-9abe-7da0a08770e7-samples-operator-tls" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965844 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="66e50eed-e3ac-431f-931b-7c4c848c491b" volumeName="kubernetes.io/configmap/66e50eed-e3ac-431f-931b-7c4c848c491b-trusted-ca-bundle" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965858 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e38e989-41b8-4c80-99fb-8d414dda5da1" volumeName="kubernetes.io/configmap/3e38e989-41b8-4c80-99fb-8d414dda5da1-images" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965872 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2a91f36f-900e-4b99-9be1-dfc61d8e31d9" volumeName="kubernetes.io/projected/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-kube-api-access-ftn6p" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965885 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8" volumeName="kubernetes.io/secret/2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8-cloud-credential-operator-serving-cert" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965904 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="66e50eed-e3ac-431f-931b-7c4c848c491b" volumeName="kubernetes.io/secret/66e50eed-e3ac-431f-931b-7c4c848c491b-serving-cert" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.965916 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="00db426a-15d4-4737-a85e-b4cf6362c759" volumeName="kubernetes.io/projected/00db426a-15d4-4737-a85e-b4cf6362c759-kube-api-access-86mrp" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.966105 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a7e92d4-b7ed-408b-b7cf-00209a627bea" volumeName="kubernetes.io/configmap/8a7e92d4-b7ed-408b-b7cf-00209a627bea-metrics-client-ca" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.966129 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6bc6f78-2c5c-4add-925f-f6568a49c2cc" volumeName="kubernetes.io/projected/b6bc6f78-2c5c-4add-925f-f6568a49c2cc-kube-api-access-c52wj" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 
kubenswrapper[29458]: I0308 22:13:50.966144 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d589bfbb-3a7d-4617-9770-5c9ef737cb4a" volumeName="kubernetes.io/projected/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-kube-api-access-9l82d" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.966157 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0269ed52-a753-49aa-9c38-c7aee23cebbd" volumeName="kubernetes.io/empty-dir/0269ed52-a753-49aa-9c38-c7aee23cebbd-node-exporter-textfile" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.966171 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2395900a-ff6b-46ff-92c6-a8a1b5675b67" volumeName="kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-client-ca" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.966184 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f1a7900-a0b2-47fc-b43c-a0a5dee6b657" volumeName="kubernetes.io/configmap/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-trusted-ca-bundle" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.966198 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4eec590b-c536-4b16-a664-81bc3c74eef5" volumeName="kubernetes.io/projected/4eec590b-c536-4b16-a664-81bc3c74eef5-kube-api-access-k67bc" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.966212 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7e0267ba-5dd7-4e81-885f-95b27a7b42ea" volumeName="kubernetes.io/configmap/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-trusted-ca" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.966226 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a" volumeName="kubernetes.io/secret/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-etcd-client" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.966242 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1232f59f-4e6a-46ef-8bec-1bd4e04956ef" volumeName="kubernetes.io/configmap/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-ovnkube-config" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.966255 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8e00c74-fb72-4e3d-a22c-c38a4772a813" volumeName="kubernetes.io/projected/a8e00c74-fb72-4e3d-a22c-c38a4772a813-kube-api-access-gwqqw" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.966268 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b849f992-1020-4633-98be-75705b962fa9" volumeName="kubernetes.io/projected/b849f992-1020-4633-98be-75705b962fa9-kube-api-access" seLinuxMountContext="" Mar 08 22:13:50.966181 master-0 kubenswrapper[29458]: I0308 22:13:50.966291 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b849f992-1020-4633-98be-75705b962fa9" 
volumeName="kubernetes.io/secret/b849f992-1020-4633-98be-75705b962fa9-serving-cert" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966312 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d063b330-4180-43de-a248-c573183d96f1" volumeName="kubernetes.io/configmap/d063b330-4180-43de-a248-c573183d96f1-images" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966327 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="df48e7e0-0659-48e2-9b6a-32c964ff47b2" volumeName="kubernetes.io/projected/df48e7e0-0659-48e2-9b6a-32c964ff47b2-kube-api-access-4dr4p" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966343 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0269ed52-a753-49aa-9c38-c7aee23cebbd" volumeName="kubernetes.io/projected/0269ed52-a753-49aa-9c38-c7aee23cebbd-kube-api-access-8fp4g" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966359 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e2eb05c-eaa5-4d9b-abad-c0ef6835087e" volumeName="kubernetes.io/projected/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e-kube-api-access-lhp8w" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966372 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5afb146-31d7-4da9-8738-b6c15528233a" volumeName="kubernetes.io/secret/a5afb146-31d7-4da9-8738-b6c15528233a-encryption-config" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966385 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9fe466f-5a23-4f69-8a96-44bd5d6194f5" volumeName="kubernetes.io/projected/d9fe466f-5a23-4f69-8a96-44bd5d6194f5-kube-api-access-nvmk7" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966400 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="de89c423-0f2a-440f-9fa9-92fefea84b09" volumeName="kubernetes.io/projected/de89c423-0f2a-440f-9fa9-92fefea84b09-kube-api-access-7h4vv" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966437 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04fb7bdb-fb5a-4187-94a3-67c8f09684ed" volumeName="kubernetes.io/projected/04fb7bdb-fb5a-4187-94a3-67c8f09684ed-kube-api-access" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966455 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b1207b6b-0517-46eb-9953-737f2bf1040d" volumeName="kubernetes.io/projected/b1207b6b-0517-46eb-9953-737f2bf1040d-kube-api-access-d2lsl" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966469 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2851c096-f5cb-4a46-a5a0-ac0b1341033b" volumeName="kubernetes.io/projected/2851c096-f5cb-4a46-a5a0-ac0b1341033b-kube-api-access-2l47w" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966481 29458 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4b5246dc-b715-4678-a3a9-878df57dd236" volumeName="kubernetes.io/secret/4b5246dc-b715-4678-a3a9-878df57dd236-node-bootstrap-token" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966494 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81f5ed55-225c-41e2-bc9d-b41063a604c9" volumeName="kubernetes.io/projected/81f5ed55-225c-41e2-bc9d-b41063a604c9-kube-api-access-7kz92" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966508 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5afb146-31d7-4da9-8738-b6c15528233a" volumeName="kubernetes.io/configmap/a5afb146-31d7-4da9-8738-b6c15528233a-trusted-ca-bundle" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966522 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ecb3134a-ff4f-4069-8817-010b400296f6" volumeName="kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-federate-client-tls" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966534 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="077643a2-ab2d-4f12-9abf-42a1c56d7aff" volumeName="kubernetes.io/projected/077643a2-ab2d-4f12-9abf-42a1c56d7aff-kube-api-access-mp26r" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966585 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4382d186-34e4-40af-9b92-bb17ddcaa23f" volumeName="kubernetes.io/configmap/4382d186-34e4-40af-9b92-bb17ddcaa23f-etcd-service-ca" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966599 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c901b468-b8e9-48f8-8050-0d54e24e2adb" volumeName="kubernetes.io/projected/c901b468-b8e9-48f8-8050-0d54e24e2adb-kube-api-access-hmfqq" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966617 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d589bfbb-3a7d-4617-9770-5c9ef737cb4a" volumeName="kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-client-ca-bundle" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966636 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9e9c931-9595-42f1-bbc2-c412286f6cd1" volumeName="kubernetes.io/configmap/d9e9c931-9595-42f1-bbc2-c412286f6cd1-images" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966657 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="df48e7e0-0659-48e2-9b6a-32c964ff47b2" volumeName="kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966674 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3fbcd83-a3e1-4de1-aceb-2692d348e495" 
volumeName="kubernetes.io/empty-dir/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-tuned" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966696 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2395900a-ff6b-46ff-92c6-a8a1b5675b67" volumeName="kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-config" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966712 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2395900a-ff6b-46ff-92c6-a8a1b5675b67" volumeName="kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-proxy-ca-bundles" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966728 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96a67acb-9cc6-4793-b99a-01479b239d76" volumeName="kubernetes.io/projected/96a67acb-9cc6-4793-b99a-01479b239d76-kube-api-access-d9xj9" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966774 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b358dcb7-d01f-4206-b636-b55a599a73bd" volumeName="kubernetes.io/configmap/b358dcb7-d01f-4206-b636-b55a599a73bd-iptables-alerter-script" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966796 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e8ef68b9-6f8d-4697-b269-91ee4e310752" volumeName="kubernetes.io/secret/e8ef68b9-6f8d-4697-b269-91ee4e310752-signing-key" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966810 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0cb21214-292a-48ee-85e2-6b1e62f40cb4" volumeName="kubernetes.io/configmap/0cb21214-292a-48ee-85e2-6b1e62f40cb4-config-volume" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966823 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e38e989-41b8-4c80-99fb-8d414dda5da1" volumeName="kubernetes.io/secret/3e38e989-41b8-4c80-99fb-8d414dda5da1-proxy-tls" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966835 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4ef806a4-5486-43a9-8bfa-b1670c888dc1" volumeName="kubernetes.io/configmap/4ef806a4-5486-43a9-8bfa-b1670c888dc1-telemetry-config" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966848 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c377685c-2024-4ef7-932d-5858eeb0d9bd" volumeName="kubernetes.io/configmap/c377685c-2024-4ef7-932d-5858eeb0d9bd-metrics-client-ca" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966860 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1232f59f-4e6a-46ef-8bec-1bd4e04956ef" volumeName="kubernetes.io/configmap/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-ovnkube-script-lib" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966875 29458 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="d4d01185-e485-4697-92c2-31a044f25d82" volumeName="kubernetes.io/configmap/d4d01185-e485-4697-92c2-31a044f25d82-config" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966887 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1232f59f-4e6a-46ef-8bec-1bd4e04956ef" volumeName="kubernetes.io/configmap/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-env-overrides" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966903 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7868a4fb-af89-4bdc-b41b-31f4ee59b5f3" volumeName="kubernetes.io/secret/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3-proxy-tls" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966919 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed" volumeName="kubernetes.io/projected/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-kube-api-access-vwdhp" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966940 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9e9c931-9595-42f1-bbc2-c412286f6cd1" volumeName="kubernetes.io/configmap/d9e9c931-9595-42f1-bbc2-c412286f6cd1-config" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966960 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4eec590b-c536-4b16-a664-81bc3c74eef5" volumeName="kubernetes.io/empty-dir/4eec590b-c536-4b16-a664-81bc3c74eef5-utilities" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966973 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a913c639-ebfc-42a3-85cd-8a460027d3ec" volumeName="kubernetes.io/configmap/a913c639-ebfc-42a3-85cd-8a460027d3ec-trusted-ca" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.966985 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d0641333-feda-44c5-baf5-ceee4ce3fd8f" volumeName="kubernetes.io/projected/d0641333-feda-44c5-baf5-ceee4ce3fd8f-kube-api-access-784c7" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967000 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9fe466f-5a23-4f69-8a96-44bd5d6194f5" volumeName="kubernetes.io/secret/d9fe466f-5a23-4f69-8a96-44bd5d6194f5-cert" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967013 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81f5ed55-225c-41e2-bc9d-b41063a604c9" volumeName="kubernetes.io/secret/81f5ed55-225c-41e2-bc9d-b41063a604c9-metrics-certs" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967025 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a7e92d4-b7ed-408b-b7cf-00209a627bea" volumeName="kubernetes.io/projected/8a7e92d4-b7ed-408b-b7cf-00209a627bea-kube-api-access-qdz7m" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: 
I0308 22:13:50.967038 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be431b74-1116-4b0f-8b25-bbb0408411b0" volumeName="kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967052 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9e9c931-9595-42f1-bbc2-c412286f6cd1" volumeName="kubernetes.io/secret/d9e9c931-9595-42f1-bbc2-c412286f6cd1-cert" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967064 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ecb3134a-ff4f-4069-8817-010b400296f6" volumeName="kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-metrics-client-ca" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967101 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="077643a2-ab2d-4f12-9abf-42a1c56d7aff" volumeName="kubernetes.io/projected/077643a2-ab2d-4f12-9abf-42a1c56d7aff-ca-certs" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967115 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0cb21214-292a-48ee-85e2-6b1e62f40cb4" volumeName="kubernetes.io/projected/0cb21214-292a-48ee-85e2-6b1e62f40cb4-kube-api-access-sg2dp" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967131 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="385e69e4-d443-44bb-8ee4-578a1c902c62" volumeName="kubernetes.io/configmap/385e69e4-d443-44bb-8ee4-578a1c902c62-cni-binary-copy" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967143 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4382d186-34e4-40af-9b92-bb17ddcaa23f" volumeName="kubernetes.io/secret/4382d186-34e4-40af-9b92-bb17ddcaa23f-etcd-client" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967154 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7868a4fb-af89-4bdc-b41b-31f4ee59b5f3" volumeName="kubernetes.io/configmap/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3-mcd-auth-proxy-config" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967168 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4ef806a4-5486-43a9-8bfa-b1670c888dc1" volumeName="kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967184 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5afb146-31d7-4da9-8738-b6c15528233a" volumeName="kubernetes.io/secret/a5afb146-31d7-4da9-8738-b6c15528233a-serving-cert" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967196 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="081acedd-4c88-461f-80f3-e80fdbadb725" 
volumeName="kubernetes.io/secret/081acedd-4c88-461f-80f3-e80fdbadb725-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967209 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2851c096-f5cb-4a46-a5a0-ac0b1341033b" volumeName="kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967222 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="971ffa86-4d52-4dc3-ba28-03d116ec3494" volumeName="kubernetes.io/configmap/971ffa86-4d52-4dc3-ba28-03d116ec3494-config" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967250 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d851f97-b21e-432e-a4c3-dc0a8ff00e84" volumeName="kubernetes.io/secret/0d851f97-b21e-432e-a4c3-dc0a8ff00e84-serving-cert" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967264 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37bf82cb-adea-46d3-a899-136eb1d1f292" volumeName="kubernetes.io/projected/37bf82cb-adea-46d3-a899-136eb1d1f292-kube-api-access-v6ht7" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967277 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="de89c423-0f2a-440f-9fa9-92fefea84b09" volumeName="kubernetes.io/secret/de89c423-0f2a-440f-9fa9-92fefea84b09-cluster-olm-operator-serving-cert" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967290 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="088eecd9-a153-4fe0-af5a-78f5bdc0eb6b" volumeName="kubernetes.io/empty-dir/088eecd9-a153-4fe0-af5a-78f5bdc0eb6b-utilities" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967305 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed" volumeName="kubernetes.io/projected/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-bound-sa-token" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967318 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a" volumeName="kubernetes.io/secret/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-serving-cert" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967332 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d4d01185-e485-4697-92c2-31a044f25d82" volumeName="kubernetes.io/projected/d4d01185-e485-4697-92c2-31a044f25d82-kube-api-access-ngf2z" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967346 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9e9c931-9595-42f1-bbc2-c412286f6cd1" volumeName="kubernetes.io/projected/d9e9c931-9595-42f1-bbc2-c412286f6cd1-kube-api-access-znqrj" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967359 29458 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96a67acb-9cc6-4793-b99a-01479b239d76" volumeName="kubernetes.io/configmap/96a67acb-9cc6-4793-b99a-01479b239d76-cni-sysctl-allowlist" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967393 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d0641333-feda-44c5-baf5-ceee4ce3fd8f" volumeName="kubernetes.io/secret/d0641333-feda-44c5-baf5-ceee4ce3fd8f-serving-cert" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967410 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ef14467-bb62-462d-9dec-dee43e4cc9bd" volumeName="kubernetes.io/secret/1ef14467-bb62-462d-9dec-dee43e4cc9bd-machine-api-operator-tls" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967423 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2851c096-f5cb-4a46-a5a0-ac0b1341033b" volumeName="kubernetes.io/configmap/2851c096-f5cb-4a46-a5a0-ac0b1341033b-trusted-ca" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967436 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3af41e9-c604-48da-bec5-df81c2ef3374" volumeName="kubernetes.io/configmap/c3af41e9-c604-48da-bec5-df81c2ef3374-metrics-client-ca" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967450 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da51940a-a38f-4baf-9c14-b2f1f46b5aed" volumeName="kubernetes.io/configmap/da51940a-a38f-4baf-9c14-b2f1f46b5aed-config" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967463 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e8ef68b9-6f8d-4697-b269-91ee4e310752" volumeName="kubernetes.io/configmap/e8ef68b9-6f8d-4697-b269-91ee4e310752-signing-cabundle" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967476 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="081acedd-4c88-461f-80f3-e80fdbadb725" volumeName="kubernetes.io/configmap/081acedd-4c88-461f-80f3-e80fdbadb725-env-overrides" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967492 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81f5ed55-225c-41e2-bc9d-b41063a604c9" volumeName="kubernetes.io/configmap/81f5ed55-225c-41e2-bc9d-b41063a604c9-service-ca-bundle" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967448 29458 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967509 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3af41e9-c604-48da-bec5-df81c2ef3374" volumeName="kubernetes.io/secret/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967596 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e" volumeName="kubernetes.io/projected/f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e-kube-api-access-l5xq4" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967617 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4cbc6c17-7c16-435f-9399-b6f1ddb6d17f" volumeName="kubernetes.io/projected/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-kube-api-access-gxxvr" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967632 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d589bfbb-3a7d-4617-9770-5c9ef737cb4a" volumeName="kubernetes.io/configmap/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-metrics-server-audit-profiles" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967645 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9fe466f-5a23-4f69-8a96-44bd5d6194f5" volumeName="kubernetes.io/configmap/d9fe466f-5a23-4f69-8a96-44bd5d6194f5-auth-proxy-config" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967658 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8" volumeName="kubernetes.io/projected/2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8-kube-api-access-dqkp4" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967668 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7e0267ba-5dd7-4e81-885f-95b27a7b42ea" volumeName="kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967680 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96a67acb-9cc6-4793-b99a-01479b239d76" volumeName="kubernetes.io/configmap/96a67acb-9cc6-4793-b99a-01479b239d76-cni-binary-copy" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967688 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3fbcd83-a3e1-4de1-aceb-2692d348e495" volumeName="kubernetes.io/empty-dir/f3fbcd83-a3e1-4de1-aceb-2692d348e495-tmp" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967697 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="971ffa86-4d52-4dc3-ba28-03d116ec3494" volumeName="kubernetes.io/projected/971ffa86-4d52-4dc3-ba28-03d116ec3494-kube-api-access-7z7fx" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967706 29458 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="b6bc6f78-2c5c-4add-925f-f6568a49c2cc" volumeName="kubernetes.io/secret/b6bc6f78-2c5c-4add-925f-f6568a49c2cc-proxy-tls" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967715 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c228b17c-fd7b-4273-ac03-eac5d4a3a4ad" volumeName="kubernetes.io/projected/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad-kube-api-access-sdfls" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967726 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f6fbc12f-3c27-4a7a-933f-43a55c960335" volumeName="kubernetes.io/projected/f6fbc12f-3c27-4a7a-933f-43a55c960335-kube-api-access" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967735 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="081acedd-4c88-461f-80f3-e80fdbadb725" volumeName="kubernetes.io/configmap/081acedd-4c88-461f-80f3-e80fdbadb725-ovnkube-config" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967744 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="081acedd-4c88-461f-80f3-e80fdbadb725" volumeName="kubernetes.io/projected/081acedd-4c88-461f-80f3-e80fdbadb725-kube-api-access-cpxls" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967753 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6eb502a1-db10-46ba-b698-461919464fb9" volumeName="kubernetes.io/projected/6eb502a1-db10-46ba-b698-461919464fb9-kube-api-access-sjlqz" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967767 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a" volumeName="kubernetes.io/secret/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-encryption-config" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967776 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ecb3134a-ff4f-4069-8817-010b400296f6" volumeName="kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-secret-telemeter-client" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967786 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4382d186-34e4-40af-9b92-bb17ddcaa23f" volumeName="kubernetes.io/projected/4382d186-34e4-40af-9b92-bb17ddcaa23f-kube-api-access-2hstt" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967796 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5afb146-31d7-4da9-8738-b6c15528233a" volumeName="kubernetes.io/projected/a5afb146-31d7-4da9-8738-b6c15528233a-kube-api-access-mvp5b" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967805 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d4d01185-e485-4697-92c2-31a044f25d82" volumeName="kubernetes.io/secret/d4d01185-e485-4697-92c2-31a044f25d82-serving-cert" seLinuxMountContext="" Mar 08 
22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967816 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e38e989-41b8-4c80-99fb-8d414dda5da1" volumeName="kubernetes.io/configmap/3e38e989-41b8-4c80-99fb-8d414dda5da1-auth-proxy-config" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967825 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4cbc6c17-7c16-435f-9399-b6f1ddb6d17f" volumeName="kubernetes.io/secret/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-machine-approver-tls" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967834 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4eec590b-c536-4b16-a664-81bc3c74eef5" volumeName="kubernetes.io/empty-dir/4eec590b-c536-4b16-a664-81bc3c74eef5-catalog-content" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967844 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a913c639-ebfc-42a3-85cd-8a460027d3ec" volumeName="kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967852 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="00db426a-15d4-4737-a85e-b4cf6362c759" volumeName="kubernetes.io/secret/00db426a-15d4-4737-a85e-b4cf6362c759-webhook-certs" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967862 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f1a7900-a0b2-47fc-b43c-a0a5dee6b657" volumeName="kubernetes.io/secret/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-serving-cert" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967870 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8e00c74-fb72-4e3d-a22c-c38a4772a813" volumeName="kubernetes.io/secret/a8e00c74-fb72-4e3d-a22c-c38a4772a813-serving-cert" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967879 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2851c096-f5cb-4a46-a5a0-ac0b1341033b" volumeName="kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967892 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4382d186-34e4-40af-9b92-bb17ddcaa23f" volumeName="kubernetes.io/configmap/4382d186-34e4-40af-9b92-bb17ddcaa23f-config" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967900 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed" volumeName="kubernetes.io/configmap/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-trusted-ca" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967909 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8e00c74-fb72-4e3d-a22c-c38a4772a813" 
volumeName="kubernetes.io/configmap/a8e00c74-fb72-4e3d-a22c-c38a4772a813-config" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967928 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b358dcb7-d01f-4206-b636-b55a599a73bd" volumeName="kubernetes.io/projected/b358dcb7-d01f-4206-b636-b55a599a73bd-kube-api-access-bmdmr" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967944 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da51940a-a38f-4baf-9c14-b2f1f46b5aed" volumeName="kubernetes.io/configmap/da51940a-a38f-4baf-9c14-b2f1f46b5aed-client-ca" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967958 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="de89c423-0f2a-440f-9fa9-92fefea84b09" volumeName="kubernetes.io/empty-dir/de89c423-0f2a-440f-9fa9-92fefea84b09-operand-assets" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.967987 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0269ed52-a753-49aa-9c38-c7aee23cebbd" volumeName="kubernetes.io/secret/0269ed52-a753-49aa-9c38-c7aee23cebbd-node-exporter-tls" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968003 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="077643a2-ab2d-4f12-9abf-42a1c56d7aff" volumeName="kubernetes.io/empty-dir/077643a2-ab2d-4f12-9abf-42a1c56d7aff-cache" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968017 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4b5246dc-b715-4678-a3a9-878df57dd236" volumeName="kubernetes.io/projected/4b5246dc-b715-4678-a3a9-878df57dd236-kube-api-access-hq7xb" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968030 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="66e50eed-e3ac-431f-931b-7c4c848c491b" volumeName="kubernetes.io/configmap/66e50eed-e3ac-431f-931b-7c4c848c491b-service-ca-bundle" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968042 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5afb146-31d7-4da9-8738-b6c15528233a" volumeName="kubernetes.io/secret/a5afb146-31d7-4da9-8738-b6c15528233a-etcd-client" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968052 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dfe625a1-5ba4-491f-9ab3-5d91154961a0" volumeName="kubernetes.io/configmap/dfe625a1-5ba4-491f-9ab3-5d91154961a0-ovnkube-identity-cm" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968061 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dfe625a1-5ba4-491f-9ab3-5d91154961a0" volumeName="kubernetes.io/projected/dfe625a1-5ba4-491f-9ab3-5d91154961a0-kube-api-access-j9c64" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968088 29458 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a" volumeName="kubernetes.io/configmap/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-config" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968099 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a" volumeName="kubernetes.io/projected/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-kube-api-access-lpb8q" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968112 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e635b0da-956b-4636-bc9b-61f231241908" volumeName="kubernetes.io/secret/e635b0da-956b-4636-bc9b-61f231241908-tls-certificates" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968122 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3c50dd1f-fcbc-412c-a1cc-0738ea4464e0" volumeName="kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968140 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a7e92d4-b7ed-408b-b7cf-00209a627bea" volumeName="kubernetes.io/secret/8a7e92d4-b7ed-408b-b7cf-00209a627bea-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968154 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6bc6f78-2c5c-4add-925f-f6568a49c2cc" volumeName="kubernetes.io/configmap/b6bc6f78-2c5c-4add-925f-f6568a49c2cc-mcc-auth-proxy-config" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968167 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c377685c-2024-4ef7-932d-5858eeb0d9bd" volumeName="kubernetes.io/secret/c377685c-2024-4ef7-932d-5858eeb0d9bd-openshift-state-metrics-tls" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968183 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da51940a-a38f-4baf-9c14-b2f1f46b5aed" volumeName="kubernetes.io/secret/da51940a-a38f-4baf-9c14-b2f1f46b5aed-serving-cert" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968196 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d0feb73-2ef6-4083-81ce-82a1394ce9c4" volumeName="kubernetes.io/projected/0d0feb73-2ef6-4083-81ce-82a1394ce9c4-kube-api-access-jfpt7" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968207 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d851f97-b21e-432e-a4c3-dc0a8ff00e84" volumeName="kubernetes.io/projected/0d851f97-b21e-432e-a4c3-dc0a8ff00e84-kube-api-access-7tlmx" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968217 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4cbc6c17-7c16-435f-9399-b6f1ddb6d17f" volumeName="kubernetes.io/configmap/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-auth-proxy-config" 
seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968227 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="89619d97-2c16-4e76-ba80-8b519f6a9366" volumeName="kubernetes.io/projected/89619d97-2c16-4e76-ba80-8b519f6a9366-kube-api-access-zj5rx" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968237 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3af41e9-c604-48da-bec5-df81c2ef3374" volumeName="kubernetes.io/empty-dir/c3af41e9-c604-48da-bec5-df81c2ef3374-volume-directive-shadow" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968246 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="088eecd9-a153-4fe0-af5a-78f5bdc0eb6b" volumeName="kubernetes.io/projected/088eecd9-a153-4fe0-af5a-78f5bdc0eb6b-kube-api-access-w5t9m" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968255 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="385e69e4-d443-44bb-8ee4-578a1c902c62" volumeName="kubernetes.io/configmap/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-daemon-config" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968271 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81f5ed55-225c-41e2-bc9d-b41063a604c9" volumeName="kubernetes.io/secret/81f5ed55-225c-41e2-bc9d-b41063a604c9-stats-auth" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968281 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e8ef68b9-6f8d-4697-b269-91ee4e310752" volumeName="kubernetes.io/projected/e8ef68b9-6f8d-4697-b269-91ee4e310752-kube-api-access-6ht4t" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968297 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b1207b6b-0517-46eb-9953-737f2bf1040d" volumeName="kubernetes.io/empty-dir/b1207b6b-0517-46eb-9953-737f2bf1040d-catalog-content" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968307 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d589bfbb-3a7d-4617-9770-5c9ef737cb4a" volumeName="kubernetes.io/empty-dir/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-audit-log" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968317 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0269ed52-a753-49aa-9c38-c7aee23cebbd" volumeName="kubernetes.io/configmap/0269ed52-a753-49aa-9c38-c7aee23cebbd-metrics-client-ca" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968327 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2a91f36f-900e-4b99-9be1-dfc61d8e31d9" volumeName="kubernetes.io/projected/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-ca-certs" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968341 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a5afb146-31d7-4da9-8738-b6c15528233a" volumeName="kubernetes.io/configmap/a5afb146-31d7-4da9-8738-b6c15528233a-audit-policies" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968350 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da51940a-a38f-4baf-9c14-b2f1f46b5aed" volumeName="kubernetes.io/projected/da51940a-a38f-4baf-9c14-b2f1f46b5aed-kube-api-access-clxsk" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968364 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4b5246dc-b715-4678-a3a9-878df57dd236" volumeName="kubernetes.io/secret/4b5246dc-b715-4678-a3a9-878df57dd236-certs" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968380 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81f5ed55-225c-41e2-bc9d-b41063a604c9" volumeName="kubernetes.io/secret/81f5ed55-225c-41e2-bc9d-b41063a604c9-default-certificate" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968395 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d063b330-4180-43de-a248-c573183d96f1" volumeName="kubernetes.io/configmap/d063b330-4180-43de-a248-c573183d96f1-auth-proxy-config" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968412 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ecb3134a-ff4f-4069-8817-010b400296f6" volumeName="kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-serving-certs-ca-bundle" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968423 29458 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3fbcd83-a3e1-4de1-aceb-2692d348e495" volumeName="kubernetes.io/projected/f3fbcd83-a3e1-4de1-aceb-2692d348e495-kube-api-access-5jwf9" seLinuxMountContext="" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968438 29458 reconstruct.go:97] "Volume reconstruction finished" Mar 08 22:13:50.968912 master-0 kubenswrapper[29458]: I0308 22:13:50.968451 29458 reconciler.go:26] "Reconciler: start to sync state" Mar 08 22:13:50.975715 master-0 kubenswrapper[29458]: I0308 22:13:50.971256 29458 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 08 22:13:50.975715 master-0 kubenswrapper[29458]: I0308 22:13:50.971354 29458 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 08 22:13:50.975715 master-0 kubenswrapper[29458]: I0308 22:13:50.971461 29458 kubelet.go:2335] "Starting kubelet main sync loop" Mar 08 22:13:50.975715 master-0 kubenswrapper[29458]: E0308 22:13:50.971632 29458 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 08 22:13:50.975715 master-0 kubenswrapper[29458]: I0308 22:13:50.972299 29458 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 08 22:13:50.979111 master-0 kubenswrapper[29458]: I0308 22:13:50.977886 29458 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 08 22:13:50.991063 master-0 kubenswrapper[29458]: I0308 22:13:50.991005 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-qv4bv_2a91f36f-900e-4b99-9be1-dfc61d8e31d9/manager/1.log" Mar 08 22:13:50.991661 master-0 kubenswrapper[29458]: I0308 22:13:50.991610 29458 generic.go:334] "Generic (PLEG): container finished" podID="2a91f36f-900e-4b99-9be1-dfc61d8e31d9" containerID="bbaef61fb3881295b80f5476ce40c1eeb152f4f8c17f1203f7df159cc62e41fb" exitCode=1 Mar 08 22:13:50.998660 master-0 kubenswrapper[29458]: I0308 22:13:50.998614 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-dvgxg_d9fe466f-5a23-4f69-8a96-44bd5d6194f5/cluster-autoscaler-operator/0.log" Mar 08 22:13:50.999391 master-0 kubenswrapper[29458]: I0308 22:13:50.999327 29458 generic.go:334] "Generic (PLEG): container finished" podID="d9fe466f-5a23-4f69-8a96-44bd5d6194f5" containerID="d28b9b684de2ee6afb8af986b004969105b39b6920f35f943824b725390ab335" exitCode=255 Mar 08 22:13:51.010715 master-0 kubenswrapper[29458]: I0308 22:13:51.010668 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-nk294_077643a2-ab2d-4f12-9abf-42a1c56d7aff/manager/1.log" Mar 08 22:13:51.011416 master-0 kubenswrapper[29458]: I0308 22:13:51.011334 29458 generic.go:334] "Generic (PLEG): container finished" podID="077643a2-ab2d-4f12-9abf-42a1c56d7aff" containerID="a4567b8a512f6afc2a33af0577da173a511b7ea0b98b67a3e548c26a0e448321" exitCode=1 Mar 08 22:13:51.021925 master-0 kubenswrapper[29458]: I0308 22:13:51.021862 29458 generic.go:334] "Generic (PLEG): container finished" podID="1232f59f-4e6a-46ef-8bec-1bd4e04956ef" containerID="9c0dad4facbead9173c18e63c1454c1d466a90a1041e6859864e005008acb001" exitCode=0 Mar 08 22:13:51.037454 master-0 kubenswrapper[29458]: I0308 22:13:51.037291 29458 generic.go:334] "Generic (PLEG): container finished" podID="96a67acb-9cc6-4793-b99a-01479b239d76" containerID="db4187056969875e15e546fde8b086c9df68d0dfd1ba3b2a7d33cdf8f2598f9a" exitCode=0 Mar 08 22:13:51.037454 master-0 kubenswrapper[29458]: I0308 22:13:51.037345 29458 generic.go:334] "Generic (PLEG): container finished" podID="96a67acb-9cc6-4793-b99a-01479b239d76" containerID="ba570d5274abc3eff808a6feca603573aedab7307cfb102965df1c84daee657a" exitCode=0 Mar 08 22:13:51.037454 master-0 kubenswrapper[29458]: I0308 22:13:51.037357 29458 generic.go:334] "Generic (PLEG): container finished" podID="96a67acb-9cc6-4793-b99a-01479b239d76" 
containerID="719c0f1133120f686febe97b7386aa26236fdb7648305df23056b3e40ec22875" exitCode=0 Mar 08 22:13:51.037454 master-0 kubenswrapper[29458]: I0308 22:13:51.037368 29458 generic.go:334] "Generic (PLEG): container finished" podID="96a67acb-9cc6-4793-b99a-01479b239d76" containerID="332b44c02955cc191872da4d797a1cc566a290dcc3b5e3b8b9e49f2a86f283e8" exitCode=0 Mar 08 22:13:51.037454 master-0 kubenswrapper[29458]: I0308 22:13:51.037378 29458 generic.go:334] "Generic (PLEG): container finished" podID="96a67acb-9cc6-4793-b99a-01479b239d76" containerID="8100187bff84fd39b1869b62c92c77062e916e1f9e3462572f5572d1caef3b83" exitCode=0 Mar 08 22:13:51.037454 master-0 kubenswrapper[29458]: I0308 22:13:51.037390 29458 generic.go:334] "Generic (PLEG): container finished" podID="96a67acb-9cc6-4793-b99a-01479b239d76" containerID="1de5c137bbb7c8c06869f9101463a33e4cb94c8693913396854f5dedf16bf314" exitCode=0 Mar 08 22:13:51.043701 master-0 kubenswrapper[29458]: I0308 22:13:51.043654 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-xwmmm_d9e9c931-9595-42f1-bbc2-c412286f6cd1/cluster-baremetal-operator/2.log" Mar 08 22:13:51.044211 master-0 kubenswrapper[29458]: I0308 22:13:51.044117 29458 generic.go:334] "Generic (PLEG): container finished" podID="d9e9c931-9595-42f1-bbc2-c412286f6cd1" containerID="f6e2611dc907c17bbce51678676042badff55c1b3f801a765e588a3f1a01f63e" exitCode=1 Mar 08 22:13:51.051231 master-0 kubenswrapper[29458]: I0308 22:13:51.051050 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-cjdgr_84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed/ingress-operator/4.log" Mar 08 22:13:51.051905 master-0 kubenswrapper[29458]: I0308 22:13:51.051830 29458 generic.go:334] "Generic (PLEG): container finished" podID="84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed" containerID="31335e26ca3ad44282a26b6dd2ea0c331b70b964fab198517349f95631c93a18" exitCode=1 Mar 08 22:13:51.058522 master-0 kubenswrapper[29458]: I0308 22:13:51.058452 29458 generic.go:334] "Generic (PLEG): container finished" podID="89619d97-2c16-4e76-ba80-8b519f6a9366" containerID="45472acd22cf9f28bd94833449b2d75f0a3377af69685e85fac8637f3aa96e29" exitCode=0 Mar 08 22:13:51.058522 master-0 kubenswrapper[29458]: I0308 22:13:51.058507 29458 generic.go:334] "Generic (PLEG): container finished" podID="89619d97-2c16-4e76-ba80-8b519f6a9366" containerID="b4991335150a6ed2fd7eec9480c2030f976e4351bd9e24d23f766eaa04158aae" exitCode=0 Mar 08 22:13:51.062226 master-0 kubenswrapper[29458]: I0308 22:13:51.062177 29458 generic.go:334] "Generic (PLEG): container finished" podID="4eec590b-c536-4b16-a664-81bc3c74eef5" containerID="cf1d608cd8e4a27484068f303828c57cd8c70b10159e81ee0191eb215e9cb4eb" exitCode=0 Mar 08 22:13:51.062226 master-0 kubenswrapper[29458]: I0308 22:13:51.062218 29458 generic.go:334] "Generic (PLEG): container finished" podID="4eec590b-c536-4b16-a664-81bc3c74eef5" containerID="4562b61799ee566a79cea44db886dae16855feb38419004f25ad733f55567059" exitCode=0 Mar 08 22:13:51.072822 master-0 kubenswrapper[29458]: E0308 22:13:51.072731 29458 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 08 22:13:51.073575 master-0 kubenswrapper[29458]: I0308 22:13:51.073523 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_c633355a-b323-4458-8ecb-1e490d115f59/installer/0.log" Mar 08 22:13:51.073646 master-0 kubenswrapper[29458]: 
I0308 22:13:51.073593 29458 generic.go:334] "Generic (PLEG): container finished" podID="c633355a-b323-4458-8ecb-1e490d115f59" containerID="28682516e11b7da515d28696337779453c2c96bd4cf9bfd8a8b3aa00aef7307b" exitCode=1 Mar 08 22:13:51.080171 master-0 kubenswrapper[29458]: I0308 22:13:51.080025 29458 generic.go:334] "Generic (PLEG): container finished" podID="4c3280e9367536f782caf8bdc07edb85" containerID="d76596264e33079d456422f8118c10602257329cc4dfb420dc8ccdda43115051" exitCode=0 Mar 08 22:13:51.087109 master-0 kubenswrapper[29458]: I0308 22:13:51.087017 29458 generic.go:334] "Generic (PLEG): container finished" podID="de89c423-0f2a-440f-9fa9-92fefea84b09" containerID="c1e691e59e7c1bed851b1abd3631d646daa0cf480534e0faeca027a9151c11dc" exitCode=0 Mar 08 22:13:51.087931 master-0 kubenswrapper[29458]: I0308 22:13:51.087127 29458 generic.go:334] "Generic (PLEG): container finished" podID="de89c423-0f2a-440f-9fa9-92fefea84b09" containerID="524292da38fe899d291d24e77e4f5efb26dbdfacb31c02270a11c8d9d08d5284" exitCode=0 Mar 08 22:13:51.087931 master-0 kubenswrapper[29458]: I0308 22:13:51.087142 29458 generic.go:334] "Generic (PLEG): container finished" podID="de89c423-0f2a-440f-9fa9-92fefea84b09" containerID="72b0e6a3cc3f97f5e2663934796c3814c98efd81ba66b9d9762bd04c86de3111" exitCode=0 Mar 08 22:13:51.094059 master-0 kubenswrapper[29458]: I0308 22:13:51.094001 29458 generic.go:334] "Generic (PLEG): container finished" podID="ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a" containerID="25dcfb26438ac1a8e2908fd8e10cac8fb870f8887f8afa80fca87f762351557e" exitCode=0 Mar 08 22:13:51.097888 master-0 kubenswrapper[29458]: I0308 22:13:51.097853 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_ee0b93ec-6ea0-4704-9449-57781a482ce4/installer/0.log" Mar 08 22:13:51.097969 master-0 kubenswrapper[29458]: I0308 22:13:51.097929 29458 generic.go:334] "Generic (PLEG): container finished" podID="ee0b93ec-6ea0-4704-9449-57781a482ce4" containerID="c38d9f8500098eb10c48b40a07d5d0aefa68c69ce87a29f847a74bc382b44913" exitCode=1 Mar 08 22:13:51.112818 master-0 kubenswrapper[29458]: I0308 22:13:51.112748 29458 generic.go:334] "Generic (PLEG): container finished" podID="1d188983-1f19-4c8e-b604-034bd6308139" containerID="457fd83835c6efbf11a60689076f6b36dc5b753b2b41e47858b503eb7cab62fc" exitCode=0 Mar 08 22:13:51.123993 master-0 kubenswrapper[29458]: I0308 22:13:51.123925 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-wklhr_c901b468-b8e9-48f8-8050-0d54e24e2adb/snapshot-controller/4.log" Mar 08 22:13:51.124399 master-0 kubenswrapper[29458]: I0308 22:13:51.123999 29458 generic.go:334] "Generic (PLEG): container finished" podID="c901b468-b8e9-48f8-8050-0d54e24e2adb" containerID="2d5837857b12c31514737c752f1c881539906b79a846525445cc0f9995a692a4" exitCode=1 Mar 08 22:13:51.129440 master-0 kubenswrapper[29458]: I0308 22:13:51.129401 29458 generic.go:334] "Generic (PLEG): container finished" podID="d0641333-feda-44c5-baf5-ceee4ce3fd8f" containerID="ba63e07913394038e6214607c806df6fc81079644bc68ca5910ad463422e98db" exitCode=0 Mar 08 22:13:51.129440 master-0 kubenswrapper[29458]: I0308 22:13:51.129432 29458 generic.go:334] "Generic (PLEG): container finished" podID="d0641333-feda-44c5-baf5-ceee4ce3fd8f" containerID="0a07d531f2a5fce4c32633615b34d340e2c1873fb062556ca27529a7a07f33ff" exitCode=0 Mar 08 22:13:51.134942 master-0 kubenswrapper[29458]: I0308 22:13:51.134000 29458 generic.go:334] "Generic 
(PLEG): container finished" podID="b849f992-1020-4633-98be-75705b962fa9" containerID="8a52489302a5dc96ab51b546dab29cb1d4fff7df453456bacfb9302f4b296bd5" exitCode=0 Mar 08 22:13:51.136488 master-0 kubenswrapper[29458]: I0308 22:13:51.136443 29458 generic.go:334] "Generic (PLEG): container finished" podID="088eecd9-a153-4fe0-af5a-78f5bdc0eb6b" containerID="1e3bba86fc611770354755d87c02e967df54a626a16a1218a0b91a1d1f5b23e2" exitCode=0 Mar 08 22:13:51.136488 master-0 kubenswrapper[29458]: I0308 22:13:51.136477 29458 generic.go:334] "Generic (PLEG): container finished" podID="088eecd9-a153-4fe0-af5a-78f5bdc0eb6b" containerID="17354f9a78986dd3c8de787a809b49886d6ee3c4cad78116a2e66e3dae4db975" exitCode=0 Mar 08 22:13:51.138256 master-0 kubenswrapper[29458]: I0308 22:13:51.138217 29458 generic.go:334] "Generic (PLEG): container finished" podID="b1207b6b-0517-46eb-9953-737f2bf1040d" containerID="d9ffb5341e8b8d84c9e35bd2c9065a3beacd71fe2f5c3020b9ea1e20dc28e517" exitCode=0 Mar 08 22:13:51.138256 master-0 kubenswrapper[29458]: I0308 22:13:51.138246 29458 generic.go:334] "Generic (PLEG): container finished" podID="b1207b6b-0517-46eb-9953-737f2bf1040d" containerID="da72619d44af489aac6baf5a28a18d7d685dca71b43deb1db98d79497a18fa19" exitCode=0 Mar 08 22:13:51.143449 master-0 kubenswrapper[29458]: I0308 22:13:51.140554 29458 generic.go:334] "Generic (PLEG): container finished" podID="04fb7bdb-fb5a-4187-94a3-67c8f09684ed" containerID="f871c547308cba5a44237c75ff4479c8163cef5b1e2a7ff5964a521c14faec67" exitCode=0 Mar 08 22:13:51.145547 master-0 kubenswrapper[29458]: I0308 22:13:51.145381 29458 generic.go:334] "Generic (PLEG): container finished" podID="da51940a-a38f-4baf-9c14-b2f1f46b5aed" containerID="2d58ac2e5c53847ce4d3e3a5eec0022908a1efa2318d790d98db630d929ded39" exitCode=0 Mar 08 22:13:51.147585 master-0 kubenswrapper[29458]: I0308 22:13:51.147547 29458 generic.go:334] "Generic (PLEG): container finished" podID="081acedd-4c88-461f-80f3-e80fdbadb725" containerID="b17d02ce220cb7f77b9b97b6a5543cd3f92bedd3e7c85706528fb89c8a16b4f5" exitCode=0 Mar 08 22:13:51.151863 master-0 kubenswrapper[29458]: I0308 22:13:51.151778 29458 generic.go:334] "Generic (PLEG): container finished" podID="2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8" containerID="566f64e1e5f69c2bf95c8075567ff0feb7dd0877a1f2fce23e6ae2446c0dbdb2" exitCode=0 Mar 08 22:13:51.159081 master-0 kubenswrapper[29458]: I0308 22:13:51.158952 29458 generic.go:334] "Generic (PLEG): container finished" podID="5a90a446-01fc-4032-9d02-d82e25084ea9" containerID="3eb560de291b5a27e85796d034a6bc8bf292b3b1a9fe462699eef23cc0bb8a73" exitCode=0 Mar 08 22:13:51.163681 master-0 kubenswrapper[29458]: I0308 22:13:51.163631 29458 generic.go:334] "Generic (PLEG): container finished" podID="f6fbc12f-3c27-4a7a-933f-43a55c960335" containerID="9e2fd1210b8809e9723f044551eadfefcc58034be22d2af001446424e236d937" exitCode=0 Mar 08 22:13:51.168470 master-0 kubenswrapper[29458]: I0308 22:13:51.167115 29458 generic.go:334] "Generic (PLEG): container finished" podID="a8e00c74-fb72-4e3d-a22c-c38a4772a813" containerID="e72afc2085d471295428d0c6e91b91b2d9a4e2a26d7688d062fbd6d0d26453eb" exitCode=0 Mar 08 22:13:51.169743 master-0 kubenswrapper[29458]: I0308 22:13:51.169710 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-x5zxr_be431b74-1116-4b0f-8b25-bbb0408411b0/package-server-manager/0.log" Mar 08 22:13:51.171184 master-0 kubenswrapper[29458]: I0308 22:13:51.171129 29458 generic.go:334] "Generic (PLEG): container finished" 
podID="be431b74-1116-4b0f-8b25-bbb0408411b0" containerID="337d76d1f849217e44f712b0d4de222e21178a127e60c214aafe729c50460441" exitCode=1 Mar 08 22:13:51.173160 master-0 kubenswrapper[29458]: I0308 22:13:51.173100 29458 generic.go:334] "Generic (PLEG): container finished" podID="e8ef68b9-6f8d-4697-b269-91ee4e310752" containerID="3724b6db595f74186edc6baea18527f6eae9fe894eef0ca447fc3a5e5c129bfc" exitCode=0 Mar 08 22:13:51.191462 master-0 kubenswrapper[29458]: I0308 22:13:51.191395 29458 generic.go:334] "Generic (PLEG): container finished" podID="971ffa86-4d52-4dc3-ba28-03d116ec3494" containerID="876653e3eaf25a649c1577e2202b14fc9e4231bce10bcb04ae36299b1eb1609e" exitCode=0 Mar 08 22:13:51.202386 master-0 kubenswrapper[29458]: I0308 22:13:51.201111 29458 generic.go:334] "Generic (PLEG): container finished" podID="0a43561f-bdde-456b-b4a4-2055d4fe6880" containerID="075540abc9ccd6697e1ff04ade4d337fce9916d26b47b35e3ef665f65e8db6d7" exitCode=0 Mar 08 22:13:51.211690 master-0 kubenswrapper[29458]: I0308 22:13:51.211625 29458 generic.go:334] "Generic (PLEG): container finished" podID="3e38e989-41b8-4c80-99fb-8d414dda5da1" containerID="6ed8d9b29a081602db7df52fa208e1ced8636f34e50cd9dbcb9d6a6d48cd183e" exitCode=0 Mar 08 22:13:51.246066 master-0 kubenswrapper[29458]: I0308 22:13:51.245622 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-trhtl_dfe625a1-5ba4-491f-9ab3-5d91154961a0/approver/1.log" Mar 08 22:13:51.249495 master-0 kubenswrapper[29458]: I0308 22:13:51.249420 29458 generic.go:334] "Generic (PLEG): container finished" podID="dfe625a1-5ba4-491f-9ab3-5d91154961a0" containerID="6c17da4a9a78c97b020ed2b0ce3db78d69c06f2bc4329c8df6a1559c497aade3" exitCode=1 Mar 08 22:13:51.252596 master-0 kubenswrapper[29458]: I0308 22:13:51.252552 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-c4lpf_2851c096-f5cb-4a46-a5a0-ac0b1341033b/cluster-node-tuning-operator/0.log" Mar 08 22:13:51.252659 master-0 kubenswrapper[29458]: I0308 22:13:51.252605 29458 generic.go:334] "Generic (PLEG): container finished" podID="2851c096-f5cb-4a46-a5a0-ac0b1341033b" containerID="9a488623b815fc824bec74857e2960fc417072b53ab920bd8c886dd1a94fa5ae" exitCode=1 Mar 08 22:13:51.258221 master-0 kubenswrapper[29458]: I0308 22:13:51.258152 29458 generic.go:334] "Generic (PLEG): container finished" podID="8f9a1ffa-fdef-4201-81a9-35b944f8c193" containerID="8b1f61f93e111d7a59ff7b3af6ad621f3547dafb0a9264256b214c4d46121676" exitCode=0 Mar 08 22:13:51.278271 master-0 kubenswrapper[29458]: I0308 22:13:51.264150 29458 generic.go:334] "Generic (PLEG): container finished" podID="0d851f97-b21e-432e-a4c3-dc0a8ff00e84" containerID="539c0747d69e37b439f9d78ced15438e6d882433e87666140b9b0adafe3b7125" exitCode=0 Mar 08 22:13:51.278271 master-0 kubenswrapper[29458]: I0308 22:13:51.269579 29458 generic.go:334] "Generic (PLEG): container finished" podID="b6bc6f78-2c5c-4add-925f-f6568a49c2cc" containerID="ea9d698fbce1d205747d5157a6c744e1ac0246ad5c16718bbe3cc568d31c44f2" exitCode=0 Mar 08 22:13:51.278271 master-0 kubenswrapper[29458]: I0308 22:13:51.271925 29458 generic.go:334] "Generic (PLEG): container finished" podID="a913c639-ebfc-42a3-85cd-8a460027d3ec" containerID="8bf41d7f7f99e2d4fdb83a25a837511d4994d2551b185499c8662f2b6ce0defe" exitCode=0 Mar 08 22:13:51.278271 master-0 kubenswrapper[29458]: E0308 22:13:51.272892 29458 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check 
may not have completed yet" Mar 08 22:13:51.278271 master-0 kubenswrapper[29458]: I0308 22:13:51.276022 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-64gfj_1ef14467-bb62-462d-9dec-dee43e4cc9bd/machine-api-operator/0.log" Mar 08 22:13:51.278271 master-0 kubenswrapper[29458]: I0308 22:13:51.276994 29458 generic.go:334] "Generic (PLEG): container finished" podID="1ef14467-bb62-462d-9dec-dee43e4cc9bd" containerID="8c5935d4c8ced0d1522d2fa823597581df0f0db73a8f0870aa81ef671ab128d8" exitCode=255 Mar 08 22:13:51.283331 master-0 kubenswrapper[29458]: I0308 22:13:51.283272 29458 generic.go:334] "Generic (PLEG): container finished" podID="37bf82cb-adea-46d3-a899-136eb1d1f292" containerID="04944f14b53d02d121f70fd7c26fd29d16bc18bb4704e5d81fc7ee613027b6bb" exitCode=0 Mar 08 22:13:51.285048 master-0 kubenswrapper[29458]: I0308 22:13:51.285001 29458 generic.go:334] "Generic (PLEG): container finished" podID="0269ed52-a753-49aa-9c38-c7aee23cebbd" containerID="c9cab6e5817c1932a6f2978d3ea0dfca3946b25467cd7fa690d906acf2f08a77" exitCode=0 Mar 08 22:13:51.286401 master-0 kubenswrapper[29458]: I0308 22:13:51.286371 29458 generic.go:334] "Generic (PLEG): container finished" podID="c228b17c-fd7b-4273-ac03-eac5d4a3a4ad" containerID="9d57fc4d1e08b9fa4f826dec76d98ab4964d370b21a4f1f3de9ac2217b28ef10" exitCode=0 Mar 08 22:13:51.288016 master-0 kubenswrapper[29458]: I0308 22:13:51.287901 29458 generic.go:334] "Generic (PLEG): container finished" podID="a1a56802af72ce1aac6b5077f1695ac0" containerID="b6b246bb81907eac732c126403c542413078697b3a057b896aee540f8c7e39d9" exitCode=1 Mar 08 22:13:51.289316 master-0 kubenswrapper[29458]: I0308 22:13:51.289284 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-754bdc9f9d-stxvg_4cbc6c17-7c16-435f-9399-b6f1ddb6d17f/machine-approver-controller/0.log" Mar 08 22:13:51.289620 master-0 kubenswrapper[29458]: I0308 22:13:51.289580 29458 generic.go:334] "Generic (PLEG): container finished" podID="4cbc6c17-7c16-435f-9399-b6f1ddb6d17f" containerID="4c252b52dc72b4cf9a688685e68fed111ec3680baa86d43719d7d70d42220e79" exitCode=255 Mar 08 22:13:51.292426 master-0 kubenswrapper[29458]: I0308 22:13:51.292122 29458 generic.go:334] "Generic (PLEG): container finished" podID="3f1a7900-a0b2-47fc-b43c-a0a5dee6b657" containerID="85d980d0ad1f366d812777a55826b75d7182615f3739f55dd1c63103d4d0380c" exitCode=0 Mar 08 22:13:51.313714 master-0 kubenswrapper[29458]: I0308 22:13:51.313600 29458 generic.go:334] "Generic (PLEG): container finished" podID="f0e851e2-74fc-4f4c-b907-3c9158c59cd4" containerID="5ccbb8ad117a453ccde6adce287311d7e602ee66003c156725015647e77006f5" exitCode=0 Mar 08 22:13:51.333098 master-0 kubenswrapper[29458]: I0308 22:13:51.329167 29458 generic.go:334] "Generic (PLEG): container finished" podID="2395900a-ff6b-46ff-92c6-a8a1b5675b67" containerID="8f9e5a4db5da2d0ea86986733f581cc247c4f65a008f2bc00c1b7330c1022a26" exitCode=0 Mar 08 22:13:51.348464 master-0 kubenswrapper[29458]: I0308 22:13:51.342351 29458 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="d96629c1f566486e43c8e0582e2c2eba47afa3a936c512881f234861d282525c" exitCode=0 Mar 08 22:13:51.348464 master-0 kubenswrapper[29458]: I0308 22:13:51.342395 29458 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="048081af0d4f2d7c89ebdb9c25d0b6b144830ec123396e7ecad6567e008c8334" exitCode=0 Mar 08 
22:13:51.348464 master-0 kubenswrapper[29458]: I0308 22:13:51.342404 29458 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="9b3f703e2b5dc4f53836c052b0708a079abf7ba89e449465ae68fb01236cf52d" exitCode=0 Mar 08 22:13:51.348464 master-0 kubenswrapper[29458]: I0308 22:13:51.345535 29458 generic.go:334] "Generic (PLEG): container finished" podID="f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9" containerID="a22b29816e03690faf00c5c6d5f7ea0b06750cd2c50fe9f666b86154f5e12d55" exitCode=0 Mar 08 22:13:51.352348 master-0 kubenswrapper[29458]: I0308 22:13:51.352290 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_78dc543f-66ed-4098-b5a9-699ec2ccc856/installer/0.log" Mar 08 22:13:51.352419 master-0 kubenswrapper[29458]: I0308 22:13:51.352360 29458 generic.go:334] "Generic (PLEG): container finished" podID="78dc543f-66ed-4098-b5a9-699ec2ccc856" containerID="b72861ea5791b8527c79a3ba9ca252aad4949d7fe333b8f4afa8d681aa68f9d1" exitCode=1 Mar 08 22:13:51.381105 master-0 kubenswrapper[29458]: I0308 22:13:51.376061 29458 generic.go:334] "Generic (PLEG): container finished" podID="a21e2296-10cb-4c70-ac3e-2173d35faac4" containerID="d653a3f99cf80e74726e1b1340ca117861fb6803c0c0eb0b6d0a40207c074c3a" exitCode=0 Mar 08 22:13:51.381105 master-0 kubenswrapper[29458]: I0308 22:13:51.378438 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log" Mar 08 22:13:51.381105 master-0 kubenswrapper[29458]: I0308 22:13:51.378861 29458 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="f2753c6ede26e51916276b3918863819c08fcf1e3cfeb773ba0609d9fda8556b" exitCode=1 Mar 08 22:13:51.381105 master-0 kubenswrapper[29458]: I0308 22:13:51.378881 29458 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="19b1636ab72d9a9b9983713d62f8565fb7c16719c6345915ce9c3d89fbded136" exitCode=0 Mar 08 22:13:51.385319 master-0 kubenswrapper[29458]: I0308 22:13:51.385265 29458 generic.go:334] "Generic (PLEG): container finished" podID="a5afb146-31d7-4da9-8738-b6c15528233a" containerID="1f70617dd998f936fb35fbf67cf4dddc810c8e16cdc8c2b46a2145b980e52414" exitCode=0 Mar 08 22:13:51.389031 master-0 kubenswrapper[29458]: I0308 22:13:51.388967 29458 generic.go:334] "Generic (PLEG): container finished" podID="57a34dbc-eb6d-44f5-b1aa-4762b69382ed" containerID="11d598a821a501bbacbf414ba9cb9b4053b94492a8ef82c31d41892148ed5df2" exitCode=0 Mar 08 22:13:51.405341 master-0 kubenswrapper[29458]: I0308 22:13:51.400748 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh_d063b330-4180-43de-a248-c573183d96f1/config-sync-controllers/0.log" Mar 08 22:13:51.405341 master-0 kubenswrapper[29458]: I0308 22:13:51.401251 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh_d063b330-4180-43de-a248-c573183d96f1/cluster-cloud-controller-manager/0.log" Mar 08 22:13:51.405341 master-0 kubenswrapper[29458]: I0308 22:13:51.401313 29458 generic.go:334] "Generic (PLEG): container finished" podID="d063b330-4180-43de-a248-c573183d96f1" containerID="f35f20071c5b0df4134c3bd22227a8034ca2417ef7250451b3ec29b800fa74dc" 
exitCode=1
Mar 08 22:13:51.405341 master-0 kubenswrapper[29458]: I0308 22:13:51.401334 29458 generic.go:334] "Generic (PLEG): container finished" podID="d063b330-4180-43de-a248-c573183d96f1" containerID="6db16eaa3133d25587d14c0b9e526e3d55af3b3bbd2fa785bac1c1b404fb50fd" exitCode=1
Mar 08 22:13:51.407255 master-0 kubenswrapper[29458]: I0308 22:13:51.407218 29458 generic.go:334] "Generic (PLEG): container finished" podID="7e0267ba-5dd7-4e81-885f-95b27a7b42ea" containerID="852d729d09be57b6d61037e6fcf22117d96dfe2b5817fac91c49139db7eb714e" exitCode=0
Mar 08 22:13:51.410891 master-0 kubenswrapper[29458]: I0308 22:13:51.410855 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0/installer/0.log"
Mar 08 22:13:51.411012 master-0 kubenswrapper[29458]: I0308 22:13:51.410899 29458 generic.go:334] "Generic (PLEG): container finished" podID="147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0" containerID="23ca4cac0c50a9d156ec6ed1b11f780e700b2306444f16b3646285a8a0f6b21b" exitCode=1
Mar 08 22:13:51.417887 master-0 kubenswrapper[29458]: I0308 22:13:51.417844 29458 generic.go:334] "Generic (PLEG): container finished" podID="81f5ed55-225c-41e2-bc9d-b41063a604c9" containerID="b774a43655d7769bfa98aff1d64209f6f402f99c955ad8667823c36ae49e4cf7" exitCode=0
Mar 08 22:13:51.420152 master-0 kubenswrapper[29458]: I0308 22:13:51.420048 29458 generic.go:334] "Generic (PLEG): container finished" podID="66e50eed-e3ac-431f-931b-7c4c848c491b" containerID="bd2fcdaa2b69646a1f5d77c5acf0088cc640d06a976607ae2c22145452d4676a" exitCode=0
Mar 08 22:13:51.424556 master-0 kubenswrapper[29458]: I0308 22:13:51.424509 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-c246n_6eb502a1-db10-46ba-b698-461919464fb9/control-plane-machine-set-operator/1.log"
Mar 08 22:13:51.424654 master-0 kubenswrapper[29458]: I0308 22:13:51.424582 29458 generic.go:334] "Generic (PLEG): container finished" podID="6eb502a1-db10-46ba-b698-461919464fb9" containerID="91654533c4587e9af46f22c13f2fb947540ddaf2d482fd744c4652dfb1a9f5a2" exitCode=1
Mar 08 22:13:51.444023 master-0 kubenswrapper[29458]: I0308 22:13:51.443232 29458 generic.go:334] "Generic (PLEG): container finished" podID="d4d01185-e485-4697-92c2-31a044f25d82" containerID="5af2147c5b6156b079ec16c643f5bc1c46f463b8da9a0f84030507704a3988c2" exitCode=0
Mar 08 22:13:51.448000 master-0 kubenswrapper[29458]: I0308 22:13:51.446168 29458 generic.go:334] "Generic (PLEG): container finished" podID="4382d186-34e4-40af-9b92-bb17ddcaa23f" containerID="41b89fabe8bcfa93d37c680741df23c997dd23bfef1e93509706508b89ba3e17" exitCode=0
Mar 08 22:13:51.600965 master-0 kubenswrapper[29458]: I0308 22:13:51.600917 29458 manager.go:324] Recovery completed
Mar 08 22:13:51.673524 master-0 kubenswrapper[29458]: E0308 22:13:51.673430 29458 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 08 22:13:51.711664 master-0 kubenswrapper[29458]: I0308 22:13:51.711628 29458 cpu_manager.go:225] "Starting CPU manager" policy="none"
Mar 08 22:13:51.711979 master-0 kubenswrapper[29458]: I0308 22:13:51.711966 29458 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 08 22:13:51.712104 master-0 kubenswrapper[29458]: I0308 22:13:51.712092 29458 state_mem.go:36] "Initialized new in-memory state store"
Mar 08 22:13:51.712740 master-0 kubenswrapper[29458]: I0308 22:13:51.712725 29458 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 08 22:13:51.712867 master-0 kubenswrapper[29458]: I0308 22:13:51.712839 29458 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 08 22:13:51.712934 master-0 kubenswrapper[29458]: I0308 22:13:51.712925 29458 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint"
Mar 08 22:13:51.712988 master-0 kubenswrapper[29458]: I0308 22:13:51.712979 29458 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet=""
Mar 08 22:13:51.713036 master-0 kubenswrapper[29458]: I0308 22:13:51.713028 29458 policy_none.go:49] "None policy: Start"
Mar 08 22:13:51.721455 master-0 kubenswrapper[29458]: I0308 22:13:51.721378 29458 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 08 22:13:51.721455 master-0 kubenswrapper[29458]: I0308 22:13:51.721462 29458 state_mem.go:35] "Initializing new in-memory state store"
Mar 08 22:13:51.721899 master-0 kubenswrapper[29458]: I0308 22:13:51.721864 29458 state_mem.go:75] "Updated machine memory state"
Mar 08 22:13:51.721899 master-0 kubenswrapper[29458]: I0308 22:13:51.721889 29458 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint"
Mar 08 22:13:51.744683 master-0 kubenswrapper[29458]: I0308 22:13:51.744616 29458 manager.go:334] "Starting Device Plugin manager"
Mar 08 22:13:51.745048 master-0 kubenswrapper[29458]: I0308 22:13:51.744726 29458 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 08 22:13:51.745048 master-0 kubenswrapper[29458]: I0308 22:13:51.744755 29458 server.go:79] "Starting device plugin registration server"
Mar 08 22:13:51.745755 master-0 kubenswrapper[29458]: I0308 22:13:51.745719 29458 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 08 22:13:51.745826 master-0 kubenswrapper[29458]: I0308 22:13:51.745753 29458 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 08 22:13:51.746000 master-0 kubenswrapper[29458]: I0308 22:13:51.745940 29458 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Mar 08 22:13:51.746226 master-0 kubenswrapper[29458]: I0308 22:13:51.746197 29458 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Mar 08 22:13:51.746226 master-0 kubenswrapper[29458]: I0308 22:13:51.746220 29458 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 08 22:13:51.846039 master-0 kubenswrapper[29458]: I0308 22:13:51.845924 29458 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 22:13:51.849578 master-0 kubenswrapper[29458]: I0308 22:13:51.849532 29458 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 22:13:51.849578 master-0 kubenswrapper[29458]: I0308 22:13:51.849584 29458 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 22:13:51.849682 master-0 kubenswrapper[29458]: I0308 22:13:51.849596 29458 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 22:13:51.849682 master-0 kubenswrapper[29458]: I0308 22:13:51.849665 29458 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 08 22:13:51.855527 master-0 kubenswrapper[29458]: E0308 22:13:51.855434 29458 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0"
Mar 08 22:13:51.897181 master-0 kubenswrapper[29458]: I0308 22:13:51.897132 29458 apiserver.go:52] "Watching apiserver"
Mar 08 22:13:51.934473 master-0 kubenswrapper[29458]: I0308 22:13:51.934409 29458 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 08 22:13:52.056530 master-0 kubenswrapper[29458]: I0308 22:13:52.056421 29458 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 22:13:52.060116 master-0 kubenswrapper[29458]: I0308 22:13:52.060041 29458 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 22:13:52.060234 master-0 kubenswrapper[29458]: I0308 22:13:52.060139 29458 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 22:13:52.060234 master-0 kubenswrapper[29458]: I0308 22:13:52.060161 29458 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 22:13:52.060385 master-0 kubenswrapper[29458]: I0308 22:13:52.060355 29458 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 08 22:13:52.065044 master-0 kubenswrapper[29458]: E0308 22:13:52.064922 29458 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0"
Mar 08 22:13:52.476324 master-0 kubenswrapper[29458]: I0308 22:13:52.475713 29458 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"]
Mar 08 22:13:52.476324 master-0 kubenswrapper[29458]: I0308 22:13:52.475921 29458 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 22:13:52.477612 master-0 kubenswrapper[29458]: I0308 22:13:52.477527 29458 kubelet.go:2421] "SyncLoop ADD" source="api"
pods=["openshift-monitoring/node-exporter-l8k5g","openshift-multus/multus-l8ltx","openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh","openshift-marketplace/redhat-operators-8w7wm","openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484","openshift-oauth-apiserver/apiserver-6bf768964c-srxfg","openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x","openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k","openshift-machine-config-operator/machine-config-server-svxwz","openshift-ingress/router-default-79f8cd6fdd-4fsdl","openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kx9pl","openshift-network-operator/network-operator-7c649bf6d4-znt8q","openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx","openshift-etcd/installer-1-master-0","openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm","openshift-marketplace/certified-operators-8ctpt","openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh","openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k","openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5","openshift-etcd/etcd-master-0","openshift-kube-scheduler/installer-3-master-0","openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m","openshift-dns/node-resolver-qdc2p","openshift-kube-controller-manager/installer-3-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-storage-version-migrator/migrator-57ccdf9b5-bf6ws","openshift-service-ca/service-ca-84bfdbbb7f-b8zkz","openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6","openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2","openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9","openshift-ovn-kubernetes/ovnkube-node-g4d2r","openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2","openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg","openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-network-diagnostics/network-check-source-7c67b67d47-qf2dp","assisted-installer/assisted-installer-controller-kxkrl","kube-system/bootstrap-kube-scheduler-master-0","openshift-kube-controller-manager/installer-2-master-0","openshift-machine-api/control-plane-machine-set-operator-6686554ddc-c246n","openshift-multus/multus-admission-controller-7769569c45-9lhn8","openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mkvtk","openshift-ingress-operator/ingress-operator-677db989d6-cjdgr","openshift-config-operator/openshift-config-operator-64488f9d78-krpfs","openshift-dns-operator/dns-operator-589895fbb7-wtvp5","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg","openshift-network-diagnostics/network-check-target-djlff","openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2","openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm","openshift-cluster-node-tuning-operator/tuned-rxbl5","openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr","openshift-kube-apiserver/installer-1-retry-1-master-0","openshift-marketplace/redhat-marketplace-mg95b","openshift-monitoring/metrics-server-f5876b8d7-2222x","openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8","openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr","openshift-cloud-credential-operator/cloud-
credential-operator-55d85b7b47-mfqlz","openshift-dns/dns-default-65ts8","openshift-kube-apiserver/installer-1-master-0","openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc","openshift-network-node-identity/network-node-identity-trhtl","openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv","openshift-etcd/installer-2-master-0","openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8","openshift-controller-manager/controller-manager-f7df5f5b-txsrq","openshift-kube-apiserver/installer-2-master-0","openshift-kube-scheduler/installer-4-master-0","openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf","openshift-marketplace/community-operators-47cmq","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-nl9qg","openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw","openshift-machine-config-operator/machine-config-daemon-q669r","openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg","openshift-multus/multus-additional-cni-plugins-74fmb","openshift-network-operator/iptables-alerter-pwn9k","openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr","openshift-insights/insights-operator-8f89dfddd-fn4ck","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-multus/network-metrics-daemon-lqdbv","openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg","openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj","openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25","openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w","openshift-kube-controller-manager/installer-2-retry-1-master-0","openshift-apiserver/apiserver-6f9445b8fd-w44n6","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf"] Mar 08 22:13:52.480188 master-0 kubenswrapper[29458]: I0308 22:13:52.479407 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-kxkrl" Mar 08 22:13:52.484032 master-0 kubenswrapper[29458]: I0308 22:13:52.483995 29458 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 22:13:52.495841 master-0 kubenswrapper[29458]: I0308 22:13:52.492684 29458 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 22:13:52.495841 master-0 kubenswrapper[29458]: I0308 22:13:52.492742 29458 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 22:13:52.495841 master-0 kubenswrapper[29458]: I0308 22:13:52.492918 29458 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 08 22:13:52.495841 master-0 kubenswrapper[29458]: I0308 22:13:52.490614 29458 kubelet.go:2566] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="a03c6eb7-9dc9-47bf-aa52-db1596d56137" Mar 08 22:13:52.495841 master-0 kubenswrapper[29458]: I0308 22:13:52.495785 29458 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 08 22:13:52.496485 master-0 kubenswrapper[29458]: I0308 22:13:52.487578 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 08 22:13:52.501092 master-0 kubenswrapper[29458]: I0308 22:13:52.491685 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 08 22:13:52.501092 master-0 kubenswrapper[29458]: I0308 22:13:52.494139 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 08 22:13:52.504155 master-0 kubenswrapper[29458]: I0308 22:13:52.504108 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 08 22:13:52.519105 master-0 kubenswrapper[29458]: I0308 22:13:52.518965 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 08 22:13:52.519870 master-0 kubenswrapper[29458]: I0308 22:13:52.519417 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 08 22:13:52.522128 master-0 kubenswrapper[29458]: I0308 22:13:52.521742 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 08 22:13:52.522470 master-0 kubenswrapper[29458]: I0308 22:13:52.522303 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 08 22:13:52.522470 master-0 kubenswrapper[29458]: I0308 22:13:52.522327 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 08 22:13:52.522978 master-0 kubenswrapper[29458]: I0308 22:13:52.522938 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 08 22:13:52.523793 master-0 kubenswrapper[29458]: I0308 22:13:52.523760 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 08 22:13:52.524721 master-0 kubenswrapper[29458]: I0308 22:13:52.524688 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 08 22:13:52.524825 master-0 kubenswrapper[29458]: I0308 22:13:52.524798 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 08 22:13:52.525060 master-0 kubenswrapper[29458]: I0308 22:13:52.524996 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 08 22:13:52.525684 master-0 kubenswrapper[29458]: I0308 22:13:52.525471 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 08 22:13:52.530417 master-0 kubenswrapper[29458]: I0308 22:13:52.530370 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 08 22:13:52.531188 master-0 kubenswrapper[29458]: I0308 22:13:52.531117 29458 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 08 22:13:52.531324 master-0 kubenswrapper[29458]: I0308 22:13:52.531286 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 08 22:13:52.531975 master-0 kubenswrapper[29458]: I0308 22:13:52.531936 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 08 22:13:52.532213 master-0 kubenswrapper[29458]: I0308 22:13:52.532180 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 08 22:13:52.532409 master-0 kubenswrapper[29458]: I0308 22:13:52.532375 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 08 22:13:52.532917 master-0 kubenswrapper[29458]: I0308 22:13:52.532821 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 08 22:13:52.533365 master-0 kubenswrapper[29458]: I0308 22:13:52.533184 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 08 22:13:52.534137 master-0 kubenswrapper[29458]: I0308 22:13:52.533835 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 08 22:13:52.534414 master-0 kubenswrapper[29458]: I0308 22:13:52.534240 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 08 22:13:52.534414 master-0 kubenswrapper[29458]: I0308 22:13:52.534308 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 08 22:13:52.534486 master-0 kubenswrapper[29458]: I0308 22:13:52.534415 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 08 22:13:52.534869 master-0 kubenswrapper[29458]: I0308 22:13:52.534852 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 08 22:13:52.535043 master-0 kubenswrapper[29458]: I0308 22:13:52.535019 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 08 22:13:52.535121 master-0 kubenswrapper[29458]: E0308 22:13:52.535062 29458 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-master-0\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:13:52.535201 master-0 kubenswrapper[29458]: I0308 22:13:52.535186 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 08 22:13:52.535329 master-0 kubenswrapper[29458]: I0308 22:13:52.535309 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 08 22:13:52.535381 master-0 kubenswrapper[29458]: I0308 22:13:52.535339 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 08 22:13:52.535470 master-0 kubenswrapper[29458]: I0308 22:13:52.535449 29458 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 08 22:13:52.535546 master-0 kubenswrapper[29458]: I0308 22:13:52.535532 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 08 22:13:52.535625 master-0 kubenswrapper[29458]: I0308 22:13:52.535595 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 08 22:13:52.535682 master-0 kubenswrapper[29458]: I0308 22:13:52.535635 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 08 22:13:52.535682 master-0 kubenswrapper[29458]: I0308 22:13:52.535650 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 08 22:13:52.535778 master-0 kubenswrapper[29458]: I0308 22:13:52.535750 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 08 22:13:52.535910 master-0 kubenswrapper[29458]: I0308 22:13:52.535877 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 08 22:13:52.535910 master-0 kubenswrapper[29458]: I0308 22:13:52.535900 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 08 22:13:52.536114 master-0 kubenswrapper[29458]: I0308 22:13:52.536097 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 08 22:13:52.536343 master-0 kubenswrapper[29458]: I0308 22:13:52.536320 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 08 22:13:52.537655 master-0 kubenswrapper[29458]: I0308 22:13:52.536099 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 08 22:13:52.537737 master-0 kubenswrapper[29458]: I0308 22:13:52.537668 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 08 22:13:52.537737 master-0 kubenswrapper[29458]: I0308 22:13:52.537694 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 08 22:13:52.537829 master-0 kubenswrapper[29458]: E0308 22:13:52.536990 29458 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 08 22:13:52.537872 master-0 kubenswrapper[29458]: I0308 22:13:52.536107 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 08 22:13:52.537919 master-0 kubenswrapper[29458]: I0308 22:13:52.536155 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 08 22:13:52.538014 master-0 kubenswrapper[29458]: I0308 22:13:52.536605 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 08 22:13:52.538014 master-0 kubenswrapper[29458]: I0308 22:13:52.536622 29458 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 08 22:13:52.538176 master-0 kubenswrapper[29458]: I0308 22:13:52.536647 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 08 22:13:52.538176 master-0 kubenswrapper[29458]: I0308 22:13:52.536679 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 08 22:13:52.538176 master-0 kubenswrapper[29458]: I0308 22:13:52.536719 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 08 22:13:52.538352 master-0 kubenswrapper[29458]: I0308 22:13:52.536799 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 08 22:13:52.538419 master-0 kubenswrapper[29458]: I0308 22:13:52.538309 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 08 22:13:52.538508 master-0 kubenswrapper[29458]: I0308 22:13:52.536839 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 08 22:13:52.538591 master-0 kubenswrapper[29458]: I0308 22:13:52.538563 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 08 22:13:52.538692 master-0 kubenswrapper[29458]: I0308 22:13:52.536881 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 08 22:13:52.538763 master-0 kubenswrapper[29458]: I0308 22:13:52.538740 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 08 22:13:52.538812 master-0 kubenswrapper[29458]: I0308 22:13:52.538771 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 08 22:13:52.538863 master-0 kubenswrapper[29458]: I0308 22:13:52.536895 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 08 22:13:52.538907 master-0 kubenswrapper[29458]: I0308 22:13:52.538876 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 08 22:13:52.538952 master-0 kubenswrapper[29458]: I0308 22:13:52.538940 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 08 22:13:52.539007 master-0 kubenswrapper[29458]: I0308 22:13:52.536937 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 08 22:13:52.539055 master-0 kubenswrapper[29458]: I0308 22:13:52.539010 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 08 22:13:52.539055 master-0 kubenswrapper[29458]: I0308 22:13:52.539016 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 08 22:13:52.539055 master-0 kubenswrapper[29458]: I0308 22:13:52.539028 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 08 22:13:52.539055 master-0 kubenswrapper[29458]: 
I0308 22:13:52.538742 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 08 22:13:52.539234 master-0 kubenswrapper[29458]: I0308 22:13:52.539173 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 08 22:13:52.539234 master-0 kubenswrapper[29458]: I0308 22:13:52.537047 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 08 22:13:52.539234 master-0 kubenswrapper[29458]: I0308 22:13:52.539225 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 08 22:13:52.539363 master-0 kubenswrapper[29458]: I0308 22:13:52.539347 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 08 22:13:52.540925 master-0 kubenswrapper[29458]: I0308 22:13:52.537063 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 08 22:13:52.541308 master-0 kubenswrapper[29458]: I0308 22:13:52.537120 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 08 22:13:52.541501 master-0 kubenswrapper[29458]: I0308 22:13:52.537175 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 08 22:13:52.541659 master-0 kubenswrapper[29458]: I0308 22:13:52.537441 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 08 22:13:52.541772 master-0 kubenswrapper[29458]: I0308 22:13:52.541729 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 08 22:13:52.541814 master-0 kubenswrapper[29458]: I0308 22:13:52.537453 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 08 22:13:52.541950 master-0 kubenswrapper[29458]: I0308 22:13:52.537488 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 08 22:13:52.542136 master-0 kubenswrapper[29458]: I0308 22:13:52.537546 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 08 22:13:52.542188 master-0 kubenswrapper[29458]: I0308 22:13:52.542171 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 08 22:13:52.542288 master-0 kubenswrapper[29458]: I0308 22:13:52.537621 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 08 22:13:52.542381 master-0 kubenswrapper[29458]: I0308 22:13:52.542366 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 08 22:13:52.542428 master-0 kubenswrapper[29458]: I0308 22:13:52.537841 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 08 22:13:52.542470 master-0 kubenswrapper[29458]: I0308 22:13:52.539229 29458 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 08 22:13:52.542574 master-0 kubenswrapper[29458]: I0308 22:13:52.537853 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 08 22:13:52.542747 master-0 kubenswrapper[29458]: I0308 22:13:52.537120 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Mar 08 22:13:52.543198 master-0 kubenswrapper[29458]: I0308 22:13:52.543118 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 08 22:13:52.545575 master-0 kubenswrapper[29458]: I0308 22:13:52.545516 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 08 22:13:52.546042 master-0 kubenswrapper[29458]: I0308 22:13:52.546012 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 08 22:13:52.546464 master-0 kubenswrapper[29458]: I0308 22:13:52.546441 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 08 22:13:52.546694 master-0 kubenswrapper[29458]: I0308 22:13:52.546659 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Mar 08 22:13:52.547296 master-0 kubenswrapper[29458]: I0308 22:13:52.547253 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 08 22:13:52.547629 master-0 kubenswrapper[29458]: I0308 22:13:52.547567 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 08 22:13:52.548319 master-0 kubenswrapper[29458]: I0308 22:13:52.548280 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 08 22:13:52.548590 master-0 kubenswrapper[29458]: I0308 22:13:52.548559 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 08 22:13:52.548699 master-0 kubenswrapper[29458]: I0308 22:13:52.548665 29458 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 08 22:13:52.548846 master-0 kubenswrapper[29458]: I0308 22:13:52.548816 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 08 22:13:52.548969 master-0 kubenswrapper[29458]: I0308 22:13:52.548852 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" event={"ID":"2a91f36f-900e-4b99-9be1-dfc61d8e31d9","Type":"ContainerDied","Data":"bbaef61fb3881295b80f5476ce40c1eeb152f4f8c17f1203f7df159cc62e41fb"} Mar 08 22:13:52.549030 master-0 kubenswrapper[29458]: I0308 22:13:52.548974 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" event={"ID":"2a91f36f-900e-4b99-9be1-dfc61d8e31d9","Type":"ContainerStarted","Data":"5166b178c19287374a46a00ef88c5dfe4724a44440d45b1e58c811dacd606607"} Mar 08 22:13:52.549030 master-0 kubenswrapper[29458]: I0308 22:13:52.548990 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" event={"ID":"2a91f36f-900e-4b99-9be1-dfc61d8e31d9","Type":"ContainerStarted","Data":"d186c173d59660d4939673a18315486c8567701538340aa7cd6b89f06bbf1013"} Mar 08 22:13:52.549030 master-0 kubenswrapper[29458]: I0308 22:13:52.549011 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" event={"ID":"d9fe466f-5a23-4f69-8a96-44bd5d6194f5","Type":"ContainerStarted","Data":"fc58edc3bf36ea26582cbc3848716e910d5b68321e838b246c7ee1964f56327e"} Mar 08 22:13:52.549030 master-0 kubenswrapper[29458]: I0308 22:13:52.549024 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" event={"ID":"d9fe466f-5a23-4f69-8a96-44bd5d6194f5","Type":"ContainerDied","Data":"d28b9b684de2ee6afb8af986b004969105b39b6920f35f943824b725390ab335"} Mar 08 22:13:52.549272 master-0 kubenswrapper[29458]: I0308 22:13:52.549040 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 08 22:13:52.549272 master-0 kubenswrapper[29458]: I0308 22:13:52.549057 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" event={"ID":"d9fe466f-5a23-4f69-8a96-44bd5d6194f5","Type":"ContainerStarted","Data":"48f4e5c75e011ab844af8ce6a62930e7aa5da5ffcb65fe585956c029c491a0cc"} Mar 08 22:13:52.549272 master-0 kubenswrapper[29458]: I0308 22:13:52.549084 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" event={"ID":"d9fe466f-5a23-4f69-8a96-44bd5d6194f5","Type":"ContainerStarted","Data":"3ad163e6ddc790c3a3e14754fccc71ed19c06b28b075ab51e8c743f3e036d876"} Mar 08 22:13:52.549272 master-0 kubenswrapper[29458]: I0308 22:13:52.549097 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" event={"ID":"077643a2-ab2d-4f12-9abf-42a1c56d7aff","Type":"ContainerStarted","Data":"0c8209add6ea0d058f261c8dd869620ca936a0ffd0bcfd90c4fa209b2d884ec7"} Mar 08 22:13:52.549272 master-0 kubenswrapper[29458]: I0308 22:13:52.549110 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" event={"ID":"077643a2-ab2d-4f12-9abf-42a1c56d7aff","Type":"ContainerDied","Data":"a4567b8a512f6afc2a33af0577da173a511b7ea0b98b67a3e548c26a0e448321"} Mar 08 22:13:52.549272 master-0 kubenswrapper[29458]: I0308 22:13:52.549123 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" event={"ID":"077643a2-ab2d-4f12-9abf-42a1c56d7aff","Type":"ContainerStarted","Data":"60d3d202a39452d626dd6317c7caf06c5f21b7e1a289e0984f94bd5f6ec57f48"} Mar 08 22:13:52.549272 master-0 kubenswrapper[29458]: I0308 22:13:52.549135 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" event={"ID":"077643a2-ab2d-4f12-9abf-42a1c56d7aff","Type":"ContainerStarted","Data":"be53893516c99fbabb0efb0e7767df7d102aeacc1fd8341cd8ee128754131110"} Mar 08 22:13:52.549272 master-0 kubenswrapper[29458]: I0308 22:13:52.549226 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" event={"ID":"1232f59f-4e6a-46ef-8bec-1bd4e04956ef","Type":"ContainerStarted","Data":"a83c2447e70960fbdfe950dd6467011dacf3bf1df2039d80bb85ed744ae22114"} Mar 08 22:13:52.549272 master-0 kubenswrapper[29458]: I0308 22:13:52.549243 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" event={"ID":"1232f59f-4e6a-46ef-8bec-1bd4e04956ef","Type":"ContainerStarted","Data":"3f3a585720cb97b60eb8cbfaa667ccc12e6f29874fd7d55b67a47aea9a291100"} Mar 08 22:13:52.549272 master-0 kubenswrapper[29458]: I0308 22:13:52.549256 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" event={"ID":"1232f59f-4e6a-46ef-8bec-1bd4e04956ef","Type":"ContainerStarted","Data":"01b662ddcad9510543c9dcc9932df7768b979fc609e31541baad3f6e71c738be"} Mar 08 22:13:52.549272 master-0 kubenswrapper[29458]: I0308 22:13:52.549268 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" event={"ID":"1232f59f-4e6a-46ef-8bec-1bd4e04956ef","Type":"ContainerStarted","Data":"e9c1e20ed9bedb939865dac300c7958ce6d0193156b71a6754079e06a20f4c89"} Mar 08 22:13:52.549694 master-0 kubenswrapper[29458]: I0308 22:13:52.549284 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" event={"ID":"1232f59f-4e6a-46ef-8bec-1bd4e04956ef","Type":"ContainerStarted","Data":"3b192f8314031425fe5254e1d012f49629ef523f84bd3270e86d481cd6843fc0"} Mar 08 22:13:52.549694 master-0 kubenswrapper[29458]: I0308 22:13:52.549295 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" event={"ID":"1232f59f-4e6a-46ef-8bec-1bd4e04956ef","Type":"ContainerStarted","Data":"a23a3557d33fe6a5a9e6280202be1cb13261d5f9b76e81ae2f08a8aac1599e14"} Mar 08 22:13:52.549694 master-0 kubenswrapper[29458]: I0308 22:13:52.549307 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" event={"ID":"1232f59f-4e6a-46ef-8bec-1bd4e04956ef","Type":"ContainerStarted","Data":"df48fc1dc00a53360dd1855fc01fcb1f1e56dd89236b218193c7e65caf253098"} Mar 08 22:13:52.549694 master-0 kubenswrapper[29458]: I0308 22:13:52.549319 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" 
event={"ID":"1232f59f-4e6a-46ef-8bec-1bd4e04956ef","Type":"ContainerStarted","Data":"e7868786f9174536b33680c3c4367751fff82b1f36f4e75683e156d299417e58"} Mar 08 22:13:52.549694 master-0 kubenswrapper[29458]: I0308 22:13:52.549332 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" event={"ID":"1232f59f-4e6a-46ef-8bec-1bd4e04956ef","Type":"ContainerDied","Data":"9c0dad4facbead9173c18e63c1454c1d466a90a1041e6859864e005008acb001"} Mar 08 22:13:52.549694 master-0 kubenswrapper[29458]: I0308 22:13:52.549349 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" event={"ID":"1232f59f-4e6a-46ef-8bec-1bd4e04956ef","Type":"ContainerStarted","Data":"203faae43e1f3878693db94445a48431e9bbc8eef7fc425b238b62ca3a64799d"} Mar 08 22:13:52.549694 master-0 kubenswrapper[29458]: I0308 22:13:52.549362 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-bf6ws" event={"ID":"0d0feb73-2ef6-4083-81ce-82a1394ce9c4","Type":"ContainerStarted","Data":"7f5513a7ffe922d5291ba08489744871d2c54bef0e5d4ccf76a9ea9b9fb96ca1"} Mar 08 22:13:52.549694 master-0 kubenswrapper[29458]: I0308 22:13:52.549377 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-bf6ws" event={"ID":"0d0feb73-2ef6-4083-81ce-82a1394ce9c4","Type":"ContainerStarted","Data":"6bd6078c00ce19f9ca7d9c5af9e05dbf9ff45aa8af12f0b8ff8b3ca02782674f"} Mar 08 22:13:52.549694 master-0 kubenswrapper[29458]: I0308 22:13:52.549391 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-bf6ws" event={"ID":"0d0feb73-2ef6-4083-81ce-82a1394ce9c4","Type":"ContainerStarted","Data":"03e24173b288bd97ec848e0cf7a888e3b1e752701cc2a0adfe31f0bbf45fd669"} Mar 08 22:13:52.549694 master-0 kubenswrapper[29458]: I0308 22:13:52.549431 29458 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 22:13:52.549694 master-0 kubenswrapper[29458]: I0308 22:13:52.549436 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74fmb" event={"ID":"96a67acb-9cc6-4793-b99a-01479b239d76","Type":"ContainerStarted","Data":"64584e728966a4dc7f37960670b69b7def067398cf4f7ec06561a12640ec5ee2"} Mar 08 22:13:52.549694 master-0 kubenswrapper[29458]: I0308 22:13:52.549455 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74fmb" event={"ID":"96a67acb-9cc6-4793-b99a-01479b239d76","Type":"ContainerDied","Data":"db4187056969875e15e546fde8b086c9df68d0dfd1ba3b2a7d33cdf8f2598f9a"} Mar 08 22:13:52.549694 master-0 kubenswrapper[29458]: I0308 22:13:52.549472 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74fmb" event={"ID":"96a67acb-9cc6-4793-b99a-01479b239d76","Type":"ContainerDied","Data":"ba570d5274abc3eff808a6feca603573aedab7307cfb102965df1c84daee657a"} Mar 08 22:13:52.549694 master-0 kubenswrapper[29458]: I0308 22:13:52.549486 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74fmb" event={"ID":"96a67acb-9cc6-4793-b99a-01479b239d76","Type":"ContainerDied","Data":"719c0f1133120f686febe97b7386aa26236fdb7648305df23056b3e40ec22875"} Mar 08 22:13:52.549694 master-0 kubenswrapper[29458]: I0308 22:13:52.549499 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74fmb" event={"ID":"96a67acb-9cc6-4793-b99a-01479b239d76","Type":"ContainerDied","Data":"332b44c02955cc191872da4d797a1cc566a290dcc3b5e3b8b9e49f2a86f283e8"} Mar 08 22:13:52.549694 master-0 kubenswrapper[29458]: I0308 22:13:52.549513 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74fmb" event={"ID":"96a67acb-9cc6-4793-b99a-01479b239d76","Type":"ContainerDied","Data":"8100187bff84fd39b1869b62c92c77062e916e1f9e3462572f5572d1caef3b83"} Mar 08 22:13:52.549694 master-0 kubenswrapper[29458]: I0308 22:13:52.549531 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 08 22:13:52.549694 master-0 kubenswrapper[29458]: I0308 22:13:52.549566 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74fmb" event={"ID":"96a67acb-9cc6-4793-b99a-01479b239d76","Type":"ContainerDied","Data":"1de5c137bbb7c8c06869f9101463a33e4cb94c8693913396854f5dedf16bf314"} Mar 08 22:13:52.549694 master-0 kubenswrapper[29458]: I0308 22:13:52.549586 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74fmb" event={"ID":"96a67acb-9cc6-4793-b99a-01479b239d76","Type":"ContainerStarted","Data":"be3668c70e364cf34d2eac1fb81dcded6ee681b77814828fbe8aa2c73f270566"} Mar 08 22:13:52.549694 master-0 kubenswrapper[29458]: I0308 22:13:52.549601 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" event={"ID":"d9e9c931-9595-42f1-bbc2-c412286f6cd1","Type":"ContainerStarted","Data":"ec7250269822a93c50f1982f4d31a397949dd9bb5b4f057769f6310cd009ff62"} Mar 08 22:13:52.549694 master-0 kubenswrapper[29458]: I0308 22:13:52.549619 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" event={"ID":"d9e9c931-9595-42f1-bbc2-c412286f6cd1","Type":"ContainerDied","Data":"f6e2611dc907c17bbce51678676042badff55c1b3f801a765e588a3f1a01f63e"} Mar 08 22:13:52.549694 master-0 kubenswrapper[29458]: I0308 22:13:52.549633 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" event={"ID":"d9e9c931-9595-42f1-bbc2-c412286f6cd1","Type":"ContainerStarted","Data":"93b37166e7a76abfca6ddb5300495d48bbcbeedf6828ba2c36f322ef2fec8592"} Mar 08 22:13:52.549694 master-0 kubenswrapper[29458]: I0308 22:13:52.549646 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" event={"ID":"d9e9c931-9595-42f1-bbc2-c412286f6cd1","Type":"ContainerStarted","Data":"3115bea19c7db25d70ce89d976323f96371d246725faa8269d586e44afe79c19"} Mar 08 22:13:52.549694 master-0 kubenswrapper[29458]: I0308 22:13:52.549658 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" event={"ID":"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed","Type":"ContainerStarted","Data":"4737cebe7d8ef9fb43685e29dfbcfcf0ed12bbe9a9a485e2c6139850112daf4d"} Mar 08 22:13:52.549694 master-0 kubenswrapper[29458]: I0308 22:13:52.549675 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" event={"ID":"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed","Type":"ContainerDied","Data":"31335e26ca3ad44282a26b6dd2ea0c331b70b964fab198517349f95631c93a18"} Mar 08 22:13:52.550761 master-0 kubenswrapper[29458]: I0308 22:13:52.549691 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" event={"ID":"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed","Type":"ContainerStarted","Data":"9164ea1a943910cf9b8dc2033e053c71543704a60a430439fd1cb5398e260074"} Mar 08 22:13:52.550761 master-0 kubenswrapper[29458]: I0308 22:13:52.549864 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" event={"ID":"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed","Type":"ContainerStarted","Data":"9d2b94760fb5bd6c1ac833545141ede88958ba2ac4b1af0ff830a401107ab2f9"} Mar 08 22:13:52.550761 master-0 kubenswrapper[29458]: I0308 22:13:52.549878 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47cmq" event={"ID":"89619d97-2c16-4e76-ba80-8b519f6a9366","Type":"ContainerStarted","Data":"8216fde810a532dbe5b20008442fb45b7d08d72c9153e2e3074fd8899261a6e8"} Mar 08 22:13:52.550761 master-0 kubenswrapper[29458]: I0308 22:13:52.549891 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47cmq" event={"ID":"89619d97-2c16-4e76-ba80-8b519f6a9366","Type":"ContainerDied","Data":"45472acd22cf9f28bd94833449b2d75f0a3377af69685e85fac8637f3aa96e29"} Mar 08 22:13:52.550761 master-0 kubenswrapper[29458]: I0308 22:13:52.549914 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47cmq" event={"ID":"89619d97-2c16-4e76-ba80-8b519f6a9366","Type":"ContainerDied","Data":"b4991335150a6ed2fd7eec9480c2030f976e4351bd9e24d23f766eaa04158aae"} Mar 08 22:13:52.550761 master-0 kubenswrapper[29458]: I0308 22:13:52.549927 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47cmq" 
event={"ID":"89619d97-2c16-4e76-ba80-8b519f6a9366","Type":"ContainerStarted","Data":"44c8fec7b12dde9268d1d824a4d97116a83214d9f8983f61af194a3fa9aecae7"} Mar 08 22:13:52.550761 master-0 kubenswrapper[29458]: I0308 22:13:52.549938 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mg95b" event={"ID":"4eec590b-c536-4b16-a664-81bc3c74eef5","Type":"ContainerStarted","Data":"4ef317f319328b940bdd7b199470ed552b6c6819f550cb5e444b775b8545e6b6"} Mar 08 22:13:52.550761 master-0 kubenswrapper[29458]: I0308 22:13:52.549953 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mg95b" event={"ID":"4eec590b-c536-4b16-a664-81bc3c74eef5","Type":"ContainerDied","Data":"cf1d608cd8e4a27484068f303828c57cd8c70b10159e81ee0191eb215e9cb4eb"} Mar 08 22:13:52.550761 master-0 kubenswrapper[29458]: I0308 22:13:52.549970 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mg95b" event={"ID":"4eec590b-c536-4b16-a664-81bc3c74eef5","Type":"ContainerDied","Data":"4562b61799ee566a79cea44db886dae16855feb38419004f25ad733f55567059"} Mar 08 22:13:52.550761 master-0 kubenswrapper[29458]: I0308 22:13:52.549984 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mg95b" event={"ID":"4eec590b-c536-4b16-a664-81bc3c74eef5","Type":"ContainerStarted","Data":"f9ba7cd773b843371b8f8c24e533c22a9486952b2bc08a7f9b3ad3ee69e3c968"} Mar 08 22:13:52.550761 master-0 kubenswrapper[29458]: I0308 22:13:52.549998 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"c633355a-b323-4458-8ecb-1e490d115f59","Type":"ContainerDied","Data":"28682516e11b7da515d28696337779453c2c96bd4cf9bfd8a8b3aa00aef7307b"} Mar 08 22:13:52.550761 master-0 kubenswrapper[29458]: I0308 22:13:52.550012 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"c633355a-b323-4458-8ecb-1e490d115f59","Type":"ContainerDied","Data":"1d3dcf055543df28f3482d4eda49126cfdf056d4ebfa04ae9c5c2b3c8a2fd988"} Mar 08 22:13:52.550761 master-0 kubenswrapper[29458]: I0308 22:13:52.550025 29458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d3dcf055543df28f3482d4eda49126cfdf056d4ebfa04ae9c5c2b3c8a2fd988" Mar 08 22:13:52.550761 master-0 kubenswrapper[29458]: I0308 22:13:52.550037 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"4c3280e9367536f782caf8bdc07edb85","Type":"ContainerStarted","Data":"b4705c2d549fc59459e0df6c2bc03a0f2277a3053ddf1226d64637e576e24057"} Mar 08 22:13:52.550761 master-0 kubenswrapper[29458]: I0308 22:13:52.550049 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"4c3280e9367536f782caf8bdc07edb85","Type":"ContainerStarted","Data":"87234509224b5aa296c3edb61a3b1a3b781bb8769597d438cf890a47a6ad14f7"} Mar 08 22:13:52.550761 master-0 kubenswrapper[29458]: I0308 22:13:52.550062 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"4c3280e9367536f782caf8bdc07edb85","Type":"ContainerStarted","Data":"a52268c7a9c6b5185ba5bf5a9ab921c572a0729b52859577eadf1743be8f6fc4"} Mar 08 22:13:52.552180 master-0 kubenswrapper[29458]: I0308 22:13:52.550872 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"4c3280e9367536f782caf8bdc07edb85","Type":"ContainerStarted","Data":"b90ed6d3a021521ef55628a3ddd89c995649bbb6a8fee39458005032f844912d"} Mar 08 22:13:52.552180 master-0 kubenswrapper[29458]: I0308 22:13:52.551281 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"4c3280e9367536f782caf8bdc07edb85","Type":"ContainerStarted","Data":"fb46cbac56af10ae89fd868088c513812dc7dc7a80fc88543a41d6671502fafd"} Mar 08 22:13:52.552180 master-0 kubenswrapper[29458]: I0308 22:13:52.551304 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"4c3280e9367536f782caf8bdc07edb85","Type":"ContainerDied","Data":"d76596264e33079d456422f8118c10602257329cc4dfb420dc8ccdda43115051"} Mar 08 22:13:52.552180 master-0 kubenswrapper[29458]: I0308 22:13:52.551324 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"4c3280e9367536f782caf8bdc07edb85","Type":"ContainerStarted","Data":"318c84ebaf730c7c85b63db579f8af63f5545b50f015236d0cbd1a16b9495c4d"} Mar 08 22:13:52.552180 master-0 kubenswrapper[29458]: I0308 22:13:52.551346 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" event={"ID":"de89c423-0f2a-440f-9fa9-92fefea84b09","Type":"ContainerStarted","Data":"90b58b468745baab88972adca763ee9422b634b7fff248cdd5da328fd7ce916d"} Mar 08 22:13:52.552180 master-0 kubenswrapper[29458]: I0308 22:13:52.551364 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" event={"ID":"de89c423-0f2a-440f-9fa9-92fefea84b09","Type":"ContainerDied","Data":"c1e691e59e7c1bed851b1abd3631d646daa0cf480534e0faeca027a9151c11dc"} Mar 08 22:13:52.552180 master-0 kubenswrapper[29458]: I0308 22:13:52.551385 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" event={"ID":"de89c423-0f2a-440f-9fa9-92fefea84b09","Type":"ContainerDied","Data":"524292da38fe899d291d24e77e4f5efb26dbdfacb31c02270a11c8d9d08d5284"} Mar 08 22:13:52.552180 master-0 kubenswrapper[29458]: I0308 22:13:52.551417 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" event={"ID":"de89c423-0f2a-440f-9fa9-92fefea84b09","Type":"ContainerDied","Data":"72b0e6a3cc3f97f5e2663934796c3814c98efd81ba66b9d9762bd04c86de3111"} Mar 08 22:13:52.552180 master-0 kubenswrapper[29458]: I0308 22:13:52.551436 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" event={"ID":"de89c423-0f2a-440f-9fa9-92fefea84b09","Type":"ContainerStarted","Data":"6a34c2634ae54a66cec214aefe9bf2e49ebc56d1b92acdc88a8676a1ce3196bd"} Mar 08 22:13:52.552180 master-0 kubenswrapper[29458]: I0308 22:13:52.551456 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" event={"ID":"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a","Type":"ContainerStarted","Data":"2276ccb6b0f5fd08f5e56e3b902e8a6182b2a12013f6e0c332a45427339723ee"} Mar 08 22:13:52.552180 master-0 kubenswrapper[29458]: I0308 22:13:52.551473 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" 
event={"ID":"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a","Type":"ContainerStarted","Data":"9da1b27d0d2a56f2d1836cb9a7ce90ff6ce0283a3fbf3cce14a836de8ec2bd26"} Mar 08 22:13:52.552180 master-0 kubenswrapper[29458]: I0308 22:13:52.551488 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" event={"ID":"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a","Type":"ContainerDied","Data":"25dcfb26438ac1a8e2908fd8e10cac8fb870f8887f8afa80fca87f762351557e"} Mar 08 22:13:52.552180 master-0 kubenswrapper[29458]: I0308 22:13:52.551607 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 08 22:13:52.552180 master-0 kubenswrapper[29458]: I0308 22:13:52.552105 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" event={"ID":"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a","Type":"ContainerStarted","Data":"6b55e765e348290b71a16cee0db7116808a6250e19b441558bfccabf4cfbc9d8"} Mar 08 22:13:52.552180 master-0 kubenswrapper[29458]: I0308 22:13:52.552129 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"ee0b93ec-6ea0-4704-9449-57781a482ce4","Type":"ContainerDied","Data":"c38d9f8500098eb10c48b40a07d5d0aefa68c69ce87a29f847a74bc382b44913"} Mar 08 22:13:52.552180 master-0 kubenswrapper[29458]: I0308 22:13:52.552166 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"ee0b93ec-6ea0-4704-9449-57781a482ce4","Type":"ContainerDied","Data":"2845903e096222c96f510db45f4f9c79a71bfc1e7049da80e97dc3bb6436df6c"} Mar 08 22:13:52.552180 master-0 kubenswrapper[29458]: I0308 22:13:52.552186 29458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2845903e096222c96f510db45f4f9c79a71bfc1e7049da80e97dc3bb6436df6c" Mar 08 22:13:52.552180 master-0 kubenswrapper[29458]: I0308 22:13:52.552201 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh" event={"ID":"4e2eb05c-eaa5-4d9b-abad-c0ef6835087e","Type":"ContainerStarted","Data":"c9f45339dc296c60cee9cd8facd74fa45cd8d922e460c120ae31130a8da944c9"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552218 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh" event={"ID":"4e2eb05c-eaa5-4d9b-abad-c0ef6835087e","Type":"ContainerStarted","Data":"e67705a9ff72460926d3738d4c71ca542e923f9e2d5919412750e64a1d0ce8cf"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552233 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" event={"ID":"4ef806a4-5486-43a9-8bfa-b1670c888dc1","Type":"ContainerStarted","Data":"4342a61fe3f90cd7b16242cf101e42393f0a324541ef3f468a990da5fedcc62f"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552249 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" event={"ID":"4ef806a4-5486-43a9-8bfa-b1670c888dc1","Type":"ContainerStarted","Data":"53b5043fd325310586d0ad90805405242c17d1ce6d248bad4d8308d740dacd52"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552262 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7e4fb17aa6f4ce82697c1badb6e3e623","Type":"ContainerStarted","Data":"713d5bb870be4b517e2a3b6934cbc3a8dbb4fb996bc551e64dbb0c038eff7f98"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552285 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7e4fb17aa6f4ce82697c1badb6e3e623","Type":"ContainerStarted","Data":"15c38815310dffefa782d7e3b86b468eadf91008125f12d833ccabdf6a47990b"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552298 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7e4fb17aa6f4ce82697c1badb6e3e623","Type":"ContainerStarted","Data":"24468252b1016ecbfc6fabcc842f03b85cc1d8d62ad0492983e2d43991a2cb4a"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552311 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7e4fb17aa6f4ce82697c1badb6e3e623","Type":"ContainerStarted","Data":"045d96fc5260120205fd3f9cca2039678cbcc24c6c931c6bbf3f1ba418756e6c"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552325 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7e4fb17aa6f4ce82697c1badb6e3e623","Type":"ContainerStarted","Data":"270111bd9a880fa859abff7a300a5a42546d0f86314f375208a892a811a648e7"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552338 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" event={"ID":"83b5f0b6-adee-4820-8212-b4d182b178d2","Type":"ContainerStarted","Data":"ba2aacb0c56514dfd295769df8f772a329a5770387b5ffe2e5f133aa557b52d6"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552353 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" event={"ID":"83b5f0b6-adee-4820-8212-b4d182b178d2","Type":"ContainerStarted","Data":"1760bfc2a8a6cbf8ae227ef4de6bfa43714b1849e66a5382da34146e555ddd0f"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552366 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"1d188983-1f19-4c8e-b604-034bd6308139","Type":"ContainerDied","Data":"457fd83835c6efbf11a60689076f6b36dc5b753b2b41e47858b503eb7cab62fc"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552385 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"1d188983-1f19-4c8e-b604-034bd6308139","Type":"ContainerDied","Data":"f31d8d53c8b0a414548414159bd2f7308b0afe83a8791eaea5070e54129415ad"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552397 29458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f31d8d53c8b0a414548414159bd2f7308b0afe83a8791eaea5070e54129415ad" Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552410 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" 
event={"ID":"c901b468-b8e9-48f8-8050-0d54e24e2adb","Type":"ContainerDied","Data":"2d5837857b12c31514737c752f1c881539906b79a846525445cc0f9995a692a4"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552426 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" event={"ID":"c901b468-b8e9-48f8-8050-0d54e24e2adb","Type":"ContainerStarted","Data":"5e6100d027b85834b0f36e6902f07cf9a882faac96d2f9348fa6d8cef4d4f07c"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552441 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" event={"ID":"d0641333-feda-44c5-baf5-ceee4ce3fd8f","Type":"ContainerStarted","Data":"5abd06ba0394acf60c173784ce356bd55de0949b044321cf96ab684d6d56e529"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552455 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" event={"ID":"d0641333-feda-44c5-baf5-ceee4ce3fd8f","Type":"ContainerDied","Data":"ba63e07913394038e6214607c806df6fc81079644bc68ca5910ad463422e98db"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552477 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" event={"ID":"d0641333-feda-44c5-baf5-ceee4ce3fd8f","Type":"ContainerDied","Data":"0a07d531f2a5fce4c32633615b34d340e2c1873fb062556ca27529a7a07f33ff"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552490 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" event={"ID":"d0641333-feda-44c5-baf5-ceee4ce3fd8f","Type":"ContainerStarted","Data":"503b7b6ea77465c9cbfc84fe62fda0b7b8ad6a8d2fd54128890065de069b7f20"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552505 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2" event={"ID":"b849f992-1020-4633-98be-75705b962fa9","Type":"ContainerStarted","Data":"7c4256342f8aa60d3135288746ca7cb2610fe20800104f7ef53e7de2bba69b10"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552568 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2" event={"ID":"b849f992-1020-4633-98be-75705b962fa9","Type":"ContainerDied","Data":"8a52489302a5dc96ab51b546dab29cb1d4fff7df453456bacfb9302f4b296bd5"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552585 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2" event={"ID":"b849f992-1020-4633-98be-75705b962fa9","Type":"ContainerStarted","Data":"60db7aa4fe5c30fe7cef3df3e7aab11e1bd2cef81e0f4a40f64a350ab51de986"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552598 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8w7wm" event={"ID":"088eecd9-a153-4fe0-af5a-78f5bdc0eb6b","Type":"ContainerStarted","Data":"70db3f8570e6da2164b211258ae4e0d90fa0917b0d814ee5c4b2fc4c910cafda"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552611 29458 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-8w7wm" event={"ID":"088eecd9-a153-4fe0-af5a-78f5bdc0eb6b","Type":"ContainerDied","Data":"1e3bba86fc611770354755d87c02e967df54a626a16a1218a0b91a1d1f5b23e2"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552634 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8w7wm" event={"ID":"088eecd9-a153-4fe0-af5a-78f5bdc0eb6b","Type":"ContainerDied","Data":"17354f9a78986dd3c8de787a809b49886d6ee3c4cad78116a2e66e3dae4db975"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552648 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8w7wm" event={"ID":"088eecd9-a153-4fe0-af5a-78f5bdc0eb6b","Type":"ContainerStarted","Data":"46be7c8523987b3cf18afb32c173f063834fd54504cd12311bd2eab02b35bc4d"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552661 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ctpt" event={"ID":"b1207b6b-0517-46eb-9953-737f2bf1040d","Type":"ContainerStarted","Data":"e95b6b2af3d8666d9ed99fb1c58eb920d15415a3e67c3b59c97608b0cd789d62"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552675 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ctpt" event={"ID":"b1207b6b-0517-46eb-9953-737f2bf1040d","Type":"ContainerDied","Data":"d9ffb5341e8b8d84c9e35bd2c9065a3beacd71fe2f5c3020b9ea1e20dc28e517"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552689 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ctpt" event={"ID":"b1207b6b-0517-46eb-9953-737f2bf1040d","Type":"ContainerDied","Data":"da72619d44af489aac6baf5a28a18d7d685dca71b43deb1db98d79497a18fa19"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552701 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ctpt" event={"ID":"b1207b6b-0517-46eb-9953-737f2bf1040d","Type":"ContainerStarted","Data":"3bc807693a5d4854df8f60d3cc1c2f6bf083291e98e017340995c3d3b0e2bf81"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552721 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" event={"ID":"04fb7bdb-fb5a-4187-94a3-67c8f09684ed","Type":"ContainerStarted","Data":"22d88096d73da9ad2e8592e7ffa3873cc4df75c1bfa38aab96c0c93456cc6b9f"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552735 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" event={"ID":"04fb7bdb-fb5a-4187-94a3-67c8f09684ed","Type":"ContainerDied","Data":"f871c547308cba5a44237c75ff4479c8163cef5b1e2a7ff5964a521c14faec67"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552748 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" event={"ID":"04fb7bdb-fb5a-4187-94a3-67c8f09684ed","Type":"ContainerStarted","Data":"6798958131d9b6122a924f582d5cf236ae0ff108ba6efd07ed21d07002d8eda4"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552763 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q669r" 
event={"ID":"7868a4fb-af89-4bdc-b41b-31f4ee59b5f3","Type":"ContainerStarted","Data":"a4b49acdc17f72dccdea435d19b95ddc086fac3671e588788c4c65e2f7e9dc9b"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552895 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q669r" event={"ID":"7868a4fb-af89-4bdc-b41b-31f4ee59b5f3","Type":"ContainerStarted","Data":"4c8a0efa9298dfa9e5a85238c8444d06b35c3a684b882cab8d59cc5684624441"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552911 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q669r" event={"ID":"7868a4fb-af89-4bdc-b41b-31f4ee59b5f3","Type":"ContainerStarted","Data":"2e34987c76ae3161515e58a685409125bb3c2f2c0b1e13425d28a3f54cc0d97c"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552930 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" event={"ID":"da51940a-a38f-4baf-9c14-b2f1f46b5aed","Type":"ContainerStarted","Data":"8bf16092f9b3870167e38461525b39cb506803114455041ce9e1a82a40465b31"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552943 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" event={"ID":"da51940a-a38f-4baf-9c14-b2f1f46b5aed","Type":"ContainerDied","Data":"2d58ac2e5c53847ce4d3e3a5eec0022908a1efa2318d790d98db630d929ded39"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552960 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" event={"ID":"da51940a-a38f-4baf-9c14-b2f1f46b5aed","Type":"ContainerStarted","Data":"49a678c1404278a258bd5f7da531aa1c8094425dc0f885e61d43b5bf65b98923"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552974 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" event={"ID":"081acedd-4c88-461f-80f3-e80fdbadb725","Type":"ContainerStarted","Data":"76be1b9b9ad48798fd90927a0411e2ee8004152f03a23869518cd0c790a9c13f"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.552989 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" event={"ID":"081acedd-4c88-461f-80f3-e80fdbadb725","Type":"ContainerDied","Data":"b17d02ce220cb7f77b9b97b6a5543cd3f92bedd3e7c85706528fb89c8a16b4f5"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553004 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" event={"ID":"081acedd-4c88-461f-80f3-e80fdbadb725","Type":"ContainerStarted","Data":"9383b71d5d3cd947ccf24cbb393c63b89674ed85bec2d2f62c05a8b0707848a8"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553016 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" event={"ID":"081acedd-4c88-461f-80f3-e80fdbadb725","Type":"ContainerStarted","Data":"6b1e7aff193baf892eff0d308b2df2d4df9815b9047c9a97600c2e10f5583a8b"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553033 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz" event={"ID":"2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8","Type":"ContainerStarted","Data":"0c6b4b7c21dd8a4b138e3030b88605eb5d06a2cb377b0b36526cac511abff49c"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553051 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz" event={"ID":"2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8","Type":"ContainerDied","Data":"566f64e1e5f69c2bf95c8075567ff0feb7dd0877a1f2fce23e6ae2446c0dbdb2"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553067 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz" event={"ID":"2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8","Type":"ContainerStarted","Data":"294cff59d7c8d4cc43ab7857ed109621d4b5b6fd360227fbee62b81817851711"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553102 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz" event={"ID":"2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8","Type":"ContainerStarted","Data":"9d44f96a87d3e5a63998ef47058bf56c18f9a51e485b6d530baa6ae3a9c72e79"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553114 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-9lhn8" event={"ID":"00db426a-15d4-4737-a85e-b4cf6362c759","Type":"ContainerStarted","Data":"b3b5ab2b0d8d50e18ad35cade1f6c161c02a82cb4cde7ef485b681883ca98cec"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553129 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-9lhn8" event={"ID":"00db426a-15d4-4737-a85e-b4cf6362c759","Type":"ContainerStarted","Data":"20d694fb7dfac0a25e84f67b4332f4f50bd881d205956ffffe007db0387183da"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553150 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-9lhn8" event={"ID":"00db426a-15d4-4737-a85e-b4cf6362c759","Type":"ContainerStarted","Data":"67cd73a40904f0f9ea787ff881d2a840cf10744bf89845b00e5d994f7ee5b67d"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553165 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" event={"ID":"5a90a446-01fc-4032-9d02-d82e25084ea9","Type":"ContainerDied","Data":"3eb560de291b5a27e85796d034a6bc8bf292b3b1a9fe462699eef23cc0bb8a73"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553182 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" event={"ID":"5a90a446-01fc-4032-9d02-d82e25084ea9","Type":"ContainerDied","Data":"9f59e2b32d6bb3b93d7fd47687e65c1a832f20441aaa4a265c3bd462b3ab818c"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553194 29458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f59e2b32d6bb3b93d7fd47687e65c1a832f20441aaa4a265c3bd462b3ab818c" Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553206 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg" 
event={"ID":"f6fbc12f-3c27-4a7a-933f-43a55c960335","Type":"ContainerStarted","Data":"5edd2120046a6dae48461fa9d5e7e465dc05c369838a5b6f5ef7b51b87e3796a"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553220 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg" event={"ID":"f6fbc12f-3c27-4a7a-933f-43a55c960335","Type":"ContainerDied","Data":"9e2fd1210b8809e9723f044551eadfefcc58034be22d2af001446424e236d937"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553236 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg" event={"ID":"f6fbc12f-3c27-4a7a-933f-43a55c960335","Type":"ContainerStarted","Data":"e1a74bb495c9d9aab308272824975d3fa3476be254ef7c02bd62f9151f2ab266"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553256 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k" event={"ID":"a8e00c74-fb72-4e3d-a22c-c38a4772a813","Type":"ContainerStarted","Data":"1788c7772d1b5e51ce597b55bb6c08ca4fa7375d57a8cc22127f6515a7008256"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553271 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k" event={"ID":"a8e00c74-fb72-4e3d-a22c-c38a4772a813","Type":"ContainerDied","Data":"e72afc2085d471295428d0c6e91b91b2d9a4e2a26d7688d062fbd6d0d26453eb"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553286 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k" event={"ID":"a8e00c74-fb72-4e3d-a22c-c38a4772a813","Type":"ContainerStarted","Data":"dc168342b2accc24dd805b536a42a0f0ef9ceaae1895f17c33c4e06a0c3e9184"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553300 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" event={"ID":"be431b74-1116-4b0f-8b25-bbb0408411b0","Type":"ContainerStarted","Data":"c19e41ea10eeb91865413a7a2a10341b501fd30a392251483cdaa631d3ce1ad4"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553314 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" event={"ID":"be431b74-1116-4b0f-8b25-bbb0408411b0","Type":"ContainerDied","Data":"337d76d1f849217e44f712b0d4de222e21178a127e60c214aafe729c50460441"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553327 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" event={"ID":"be431b74-1116-4b0f-8b25-bbb0408411b0","Type":"ContainerStarted","Data":"57c8aa9b18c347fc77bfc02f5a09149b7844bf09403e274ce81dbd6022c67d26"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553347 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" event={"ID":"be431b74-1116-4b0f-8b25-bbb0408411b0","Type":"ContainerStarted","Data":"409ed7dd551984c65c75de609cd08ca919d308e8d542269375ed00b6340ac461"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553360 29458 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-b8zkz" event={"ID":"e8ef68b9-6f8d-4697-b269-91ee4e310752","Type":"ContainerStarted","Data":"55dfb1273df17a71c2face3f2f9b2be8a5c23f1ce2993ebf2043ceaa5c122430"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553374 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-b8zkz" event={"ID":"e8ef68b9-6f8d-4697-b269-91ee4e310752","Type":"ContainerDied","Data":"3724b6db595f74186edc6baea18527f6eae9fe894eef0ca447fc3a5e5c129bfc"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553389 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-b8zkz" event={"ID":"e8ef68b9-6f8d-4697-b269-91ee4e310752","Type":"ContainerStarted","Data":"65b211739156dcea6c9fedd48dbe1e6cb8361762b8f9a787cf0192fa0b5059a7"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553404 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8" event={"ID":"c377685c-2024-4ef7-932d-5858eeb0d9bd","Type":"ContainerStarted","Data":"f84e2a09ee0c2b94b3a029e14eeb278827a7b20e5cab6340015020baa528a8ed"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553422 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8" event={"ID":"c377685c-2024-4ef7-932d-5858eeb0d9bd","Type":"ContainerStarted","Data":"636096a563f9790ad280be64875e151f0e3aea218ca6c330e59deb5dc7006700"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553434 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8" event={"ID":"c377685c-2024-4ef7-932d-5858eeb0d9bd","Type":"ContainerStarted","Data":"01e7e6db40b352d1bb5e058f335eb116c496e54948df30ad1e0dec47816a596f"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553454 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8" event={"ID":"c377685c-2024-4ef7-932d-5858eeb0d9bd","Type":"ContainerStarted","Data":"dcce2795ffc43a6cd86e6b9ec76eb643d8b1c22dbdc50b3b5ab3767ff2108c08"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553467 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"acbb43bf2cf27ed60d1f635fd6638ac7","Type":"ContainerStarted","Data":"fdeb5fad707aa2fd286d6ec0c3bfa8d45fd4f299a386859f868108e7ec60d695"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553481 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"acbb43bf2cf27ed60d1f635fd6638ac7","Type":"ContainerStarted","Data":"7657ee7fb6569f1c4ef325644eaa107755f9e16754fbff803dce351304de134f"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553494 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw" event={"ID":"971ffa86-4d52-4dc3-ba28-03d116ec3494","Type":"ContainerStarted","Data":"552f289d3f2573263f7433542ba0f3e3e1e112be831b69c090b0709f1ab05697"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553511 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw" event={"ID":"971ffa86-4d52-4dc3-ba28-03d116ec3494","Type":"ContainerDied","Data":"876653e3eaf25a649c1577e2202b14fc9e4231bce10bcb04ae36299b1eb1609e"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553527 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw" event={"ID":"971ffa86-4d52-4dc3-ba28-03d116ec3494","Type":"ContainerStarted","Data":"427fdbe110b0876dd13174b0756ac4196ec70da6181541067d85f985ac05aca4"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553546 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-kxkrl" event={"ID":"0a43561f-bdde-456b-b4a4-2055d4fe6880","Type":"ContainerDied","Data":"075540abc9ccd6697e1ff04ade4d337fce9916d26b47b35e3ef665f65e8db6d7"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553560 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-kxkrl" event={"ID":"0a43561f-bdde-456b-b4a4-2055d4fe6880","Type":"ContainerDied","Data":"996a90111a18f993b31c6404a8133e717c780ce0cf180dace60851f053db5034"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553570 29458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="996a90111a18f993b31c6404a8133e717c780ce0cf180dace60851f053db5034" Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553582 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf" event={"ID":"3e38e989-41b8-4c80-99fb-8d414dda5da1","Type":"ContainerStarted","Data":"2c31cb3fb4a5626349fa3efde605472409d0006c56bde3665977151422412956"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553595 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf" event={"ID":"3e38e989-41b8-4c80-99fb-8d414dda5da1","Type":"ContainerStarted","Data":"5bbd0df97183d8637c0e656471f38367a5ad7905a4855ed56a03e62c7164dbdd"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553610 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf" event={"ID":"3e38e989-41b8-4c80-99fb-8d414dda5da1","Type":"ContainerDied","Data":"6ed8d9b29a081602db7df52fa208e1ced8636f34e50cd9dbcb9d6a6d48cd183e"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553626 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf" event={"ID":"3e38e989-41b8-4c80-99fb-8d414dda5da1","Type":"ContainerStarted","Data":"1d036d34fc0a96523a8a522c774101e6f8bb0dc6fc53b1cd8cbadc061d7fc1f7"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553645 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lqdbv" event={"ID":"44e67e41-045e-42ef-8f60-6ef15606d6a2","Type":"ContainerStarted","Data":"df5b0088e640f400af20d24a7b6f80fb2cd20c3d0136567239df8b0010e7bdef"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553663 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lqdbv" 
event={"ID":"44e67e41-045e-42ef-8f60-6ef15606d6a2","Type":"ContainerStarted","Data":"5f33344d5680163a9b22b7300b7c2175a35231534f35082c09b01e820a94217d"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553676 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lqdbv" event={"ID":"44e67e41-045e-42ef-8f60-6ef15606d6a2","Type":"ContainerStarted","Data":"0de0dd88c4bba9f852c91550e6622cdfe9b4a30a405c23edc2a915817b573fec"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553688 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-pwn9k" event={"ID":"b358dcb7-d01f-4206-b636-b55a599a73bd","Type":"ContainerStarted","Data":"4c93513e2411671b591d80db5767b0a883ed647283a5daee6cc24464557c94b7"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553702 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-pwn9k" event={"ID":"b358dcb7-d01f-4206-b636-b55a599a73bd","Type":"ContainerStarted","Data":"2f7507c2d466367da3bbc24168dc98c7fc99ef0ee4b7823db51ec09616db7efe"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553715 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" event={"ID":"ecb3134a-ff4f-4069-8817-010b400296f6","Type":"ContainerStarted","Data":"b62c2f59b7d3966761efe831860376676122986f3507dcafd946e48612f86ef4"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553736 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" event={"ID":"ecb3134a-ff4f-4069-8817-010b400296f6","Type":"ContainerStarted","Data":"138d5d8619c73c03811c136abc660b710e532f1202c13d7d1602e706a526f68e"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553750 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" event={"ID":"ecb3134a-ff4f-4069-8817-010b400296f6","Type":"ContainerStarted","Data":"39b49a99ba062a390ef6b5e55d7a6330fbf856db4c4f7d6e5517d23a5e71b49d"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553765 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" event={"ID":"ecb3134a-ff4f-4069-8817-010b400296f6","Type":"ContainerStarted","Data":"e457f58882ed9a2cc2bdb7c9bf8dd928c9031f07753ed065fd3a502525f26699"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553783 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-svxwz" event={"ID":"4b5246dc-b715-4678-a3a9-878df57dd236","Type":"ContainerStarted","Data":"8622091cf260a9c109c08c1a2cfc7b6b626d8462a700065181f25b83cce99b0c"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553799 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-svxwz" event={"ID":"4b5246dc-b715-4678-a3a9-878df57dd236","Type":"ContainerStarted","Data":"44048b3590f244e6e1938c80ea9293e108819fbabf668d1d67a4241c09d483ab"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553812 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-trhtl" 
event={"ID":"dfe625a1-5ba4-491f-9ab3-5d91154961a0","Type":"ContainerStarted","Data":"e78de91412bc1e77f8bd1aa7528f80d543f00633d1f8f9abc82a7124a38b7306"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553825 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-trhtl" event={"ID":"dfe625a1-5ba4-491f-9ab3-5d91154961a0","Type":"ContainerDied","Data":"6c17da4a9a78c97b020ed2b0ce3db78d69c06f2bc4329c8df6a1559c497aade3"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553845 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-trhtl" event={"ID":"dfe625a1-5ba4-491f-9ab3-5d91154961a0","Type":"ContainerStarted","Data":"d0965a7df17209c3214572f918df6f641eebcced99935a1fa23fd422d4732080"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553859 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-trhtl" event={"ID":"dfe625a1-5ba4-491f-9ab3-5d91154961a0","Type":"ContainerStarted","Data":"600acdcc91505be515a6dc9bb9d4094d13c856320148b4b0d5cc2092598749f2"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553872 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" event={"ID":"2851c096-f5cb-4a46-a5a0-ac0b1341033b","Type":"ContainerStarted","Data":"12e54b9f7ad60e17db8491becafc0de706219d683bbc5ce439f564e679c5111e"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553886 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" event={"ID":"2851c096-f5cb-4a46-a5a0-ac0b1341033b","Type":"ContainerDied","Data":"9a488623b815fc824bec74857e2960fc417072b53ab920bd8c886dd1a94fa5ae"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553905 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" event={"ID":"2851c096-f5cb-4a46-a5a0-ac0b1341033b","Type":"ContainerStarted","Data":"d2fca6e62ae89a98bc2678ca1c4514d3b2efd7621615252b3640dae5aca8db7e"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553917 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" event={"ID":"df48e7e0-0659-48e2-9b6a-32c964ff47b2","Type":"ContainerStarted","Data":"e3a3f13da6709438b132d9eca172683a5c6defc158c9c31ccc673ac74fd4d281"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553938 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" event={"ID":"df48e7e0-0659-48e2-9b6a-32c964ff47b2","Type":"ContainerStarted","Data":"d5596dd51e8955a57e6a69ba7f458a212f6bf75496f2cc7496253f96efcdeccc"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553953 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" event={"ID":"df48e7e0-0659-48e2-9b6a-32c964ff47b2","Type":"ContainerStarted","Data":"de7e09860c85ea273caa21fdbfda6d2e559117a5f7a6df3707305d264e29d687"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553966 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" 
event={"ID":"8f9a1ffa-fdef-4201-81a9-35b944f8c193","Type":"ContainerDied","Data":"8b1f61f93e111d7a59ff7b3af6ad621f3547dafb0a9264256b214c4d46121676"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553980 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"8f9a1ffa-fdef-4201-81a9-35b944f8c193","Type":"ContainerDied","Data":"b10a7439b4f05569de6ee0e41f25c0e406a481406829e6ce9ab87733d5ae443c"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.553992 29458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b10a7439b4f05569de6ee0e41f25c0e406a481406829e6ce9ab87733d5ae443c" Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554003 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5" event={"ID":"0d851f97-b21e-432e-a4c3-dc0a8ff00e84","Type":"ContainerStarted","Data":"b7995f2ddd717f62af994a3ce59a3ae7eb1ed5874ee99ffa525ec7853fd36239"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554016 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5" event={"ID":"0d851f97-b21e-432e-a4c3-dc0a8ff00e84","Type":"ContainerDied","Data":"539c0747d69e37b439f9d78ced15438e6d882433e87666140b9b0adafe3b7125"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554036 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5" event={"ID":"0d851f97-b21e-432e-a4c3-dc0a8ff00e84","Type":"ContainerStarted","Data":"44b935a06c24e92b8520f103f003c519a8f99b22186edfd342cc9323faa0eca5"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554052 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m" event={"ID":"b6bc6f78-2c5c-4add-925f-f6568a49c2cc","Type":"ContainerStarted","Data":"7603d2fd881e136012bf1afe42b31760a7ed92da49a974810eb9109c6a3ab95a"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554068 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m" event={"ID":"b6bc6f78-2c5c-4add-925f-f6568a49c2cc","Type":"ContainerStarted","Data":"0871d5393b2287077e78ea4cabbc123965065d582cc608c8130a11e8d227ebf0"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554112 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m" event={"ID":"b6bc6f78-2c5c-4add-925f-f6568a49c2cc","Type":"ContainerDied","Data":"ea9d698fbce1d205747d5157a6c744e1ac0246ad5c16718bbe3cc568d31c44f2"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554128 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m" event={"ID":"b6bc6f78-2c5c-4add-925f-f6568a49c2cc","Type":"ContainerStarted","Data":"ec5f0a537ae65684298a1a4ad3696c2f1fea1eefa39c8057ddfd9d3609fd93bf"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554141 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" 
event={"ID":"a913c639-ebfc-42a3-85cd-8a460027d3ec","Type":"ContainerStarted","Data":"c3d7bacea0e8378e98be2730d885890f020b45654e8e5010663e807c1cff3ed0"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554162 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" event={"ID":"a913c639-ebfc-42a3-85cd-8a460027d3ec","Type":"ContainerDied","Data":"8bf41d7f7f99e2d4fdb83a25a837511d4994d2551b185499c8662f2b6ce0defe"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554181 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" event={"ID":"a913c639-ebfc-42a3-85cd-8a460027d3ec","Type":"ContainerStarted","Data":"d06c21917a01888be55a284a4198557df93616f6e6b788240f364df6bfb82d3a"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554195 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-djlff" event={"ID":"f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e","Type":"ContainerStarted","Data":"2035cde02874bda71dfa2e89042a27ebe4c62587d22d2cbeee64782d9acfe89b"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554210 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-djlff" event={"ID":"f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e","Type":"ContainerStarted","Data":"1794b122d487b56235f5a9e6effbe7f1e37c18fe47d01e1c40b8a77c4e74da16"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554225 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj" event={"ID":"1ef14467-bb62-462d-9dec-dee43e4cc9bd","Type":"ContainerStarted","Data":"b536c467412a6f6e6bc5ac41305e5f93a486d6612aa6809a3738ce81cc84c7e4"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554239 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj" event={"ID":"1ef14467-bb62-462d-9dec-dee43e4cc9bd","Type":"ContainerDied","Data":"8c5935d4c8ced0d1522d2fa823597581df0f0db73a8f0870aa81ef671ab128d8"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554254 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj" event={"ID":"1ef14467-bb62-462d-9dec-dee43e4cc9bd","Type":"ContainerStarted","Data":"77814812894cae312166fb4b1d60568f421a6441a0acb548490be9a3f80f4c65"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554273 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj" event={"ID":"1ef14467-bb62-462d-9dec-dee43e4cc9bd","Type":"ContainerStarted","Data":"a3c825039f429bbbe3e7e27ef1491ff9c435ad7f4d68ed1d1f7b0b138f9a2544"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554286 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-qf2dp" event={"ID":"10e2e81b-cd18-4e30-b8ad-4cf105daea4a","Type":"ContainerStarted","Data":"292b7794be112451b21f81dda371f9e3caaf1ae93aa6bd4111a752df3e06bcb2"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554300 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-qf2dp" 
event={"ID":"10e2e81b-cd18-4e30-b8ad-4cf105daea4a","Type":"ContainerStarted","Data":"0c50be0fc3f4780032df6f771d4507e5bf45df79f6025c39b105620c89303b83"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554318 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-nl9qg" event={"ID":"37bf82cb-adea-46d3-a899-136eb1d1f292","Type":"ContainerStarted","Data":"654c0aeae113f0702dd86ff44c39f979b6a8a5065ae564574d931f95b93f01c2"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554335 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-nl9qg" event={"ID":"37bf82cb-adea-46d3-a899-136eb1d1f292","Type":"ContainerDied","Data":"04944f14b53d02d121f70fd7c26fd29d16bc18bb4704e5d81fc7ee613027b6bb"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554352 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-nl9qg" event={"ID":"37bf82cb-adea-46d3-a899-136eb1d1f292","Type":"ContainerStarted","Data":"362c3b514579828187f546dc53101831b423b473ee9512ccb87f2423ae6040c3"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554371 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-l8k5g" event={"ID":"0269ed52-a753-49aa-9c38-c7aee23cebbd","Type":"ContainerStarted","Data":"f1e726f349106fd18bed1f94f7bc60cc539fff615238bcc5c5225950b7dde44b"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554386 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-l8k5g" event={"ID":"0269ed52-a753-49aa-9c38-c7aee23cebbd","Type":"ContainerStarted","Data":"1852718559e4b6931ea40cd553a1b60dcc84f807d1f0a24bae4dc5ddc83f7474"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554399 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-l8k5g" event={"ID":"0269ed52-a753-49aa-9c38-c7aee23cebbd","Type":"ContainerDied","Data":"c9cab6e5817c1932a6f2978d3ea0dfca3946b25467cd7fa690d906acf2f08a77"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554414 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-l8k5g" event={"ID":"0269ed52-a753-49aa-9c38-c7aee23cebbd","Type":"ContainerStarted","Data":"dcc02028369ad7e36bc57efbe75d5305967f85a4b9666ef43d90eeaacc2b3f3e"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554427 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6" event={"ID":"c228b17c-fd7b-4273-ac03-eac5d4a3a4ad","Type":"ContainerStarted","Data":"aad05a87d233cdf378ab6db7c4437a4abb7ff79cc2a7f29656bb2dfe1e7561c4"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554441 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6" event={"ID":"c228b17c-fd7b-4273-ac03-eac5d4a3a4ad","Type":"ContainerDied","Data":"9d57fc4d1e08b9fa4f826dec76d98ab4964d370b21a4f1f3de9ac2217b28ef10"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554461 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6" 
event={"ID":"c228b17c-fd7b-4273-ac03-eac5d4a3a4ad","Type":"ContainerStarted","Data":"a5f486dd57f083148217b384b5e4b7e4ee2cd439fe07291b198c3cd32fbe85ef"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554479 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"e3a61e0f18998d1659f1848d9ff8c4de1817df1723214bfa069260c375e7739f"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554492 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerDied","Data":"b6b246bb81907eac732c126403c542413078697b3a057b896aee540f8c7e39d9"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554509 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"5cccef451dfdb5aa39f76464f49bfb358a48c497dc23473415516a86865fc62f"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554523 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg" event={"ID":"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f","Type":"ContainerStarted","Data":"7387a6e6e266fb7b7bd4761c192fb5472805d3bd3d892de94f1b2578384080b7"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554538 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg" event={"ID":"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f","Type":"ContainerDied","Data":"4c252b52dc72b4cf9a688685e68fed111ec3680baa86d43719d7d70d42220e79"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554552 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg" event={"ID":"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f","Type":"ContainerStarted","Data":"04ea7cefcb78239f13efed84a01c73c9c7a659eaa2abd9abb2c9410ed57bcc52"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554573 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg" event={"ID":"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f","Type":"ContainerStarted","Data":"c5f350fe49a4dbfc3234a2ef7026b555f76884632095fc5a87ca7626e176aff9"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554588 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" event={"ID":"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657","Type":"ContainerStarted","Data":"2982b8e7f0b4c02167f15f7a02deda31e69764d7a2b76b9065023bb494fe82f3"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554601 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" event={"ID":"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657","Type":"ContainerDied","Data":"85d980d0ad1f366d812777a55826b75d7182615f3739f55dd1c63103d4d0380c"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554616 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" 
event={"ID":"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657","Type":"ContainerStarted","Data":"e5a5d91cfd17574435ef488a30976925f613e8868e1af9e7f86a003675b330e2"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554657 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-qdc2p" event={"ID":"669ef8c8-8a32-4ebd-acc4-e8b2b45286a0","Type":"ContainerStarted","Data":"47f807e9d5285fce2274947f7a4eb45b2a4ed3581af2b6bd9b5fbd35c5540072"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554673 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-qdc2p" event={"ID":"669ef8c8-8a32-4ebd-acc4-e8b2b45286a0","Type":"ContainerStarted","Data":"b3c99d21b340bbb5b5d81e3b9c44c2f6826d5e892f5141960667fbe827f38f5e"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554694 29458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d84ecfe0cb715c9b7fdf6ae6c02c8d335c1023b605928a05b4d08849816a5d3c" Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554708 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"f0e851e2-74fc-4f4c-b907-3c9158c59cd4","Type":"ContainerDied","Data":"5ccbb8ad117a453ccde6adce287311d7e602ee66003c156725015647e77006f5"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554723 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"f0e851e2-74fc-4f4c-b907-3c9158c59cd4","Type":"ContainerDied","Data":"7806b893b20c55d1f8afd2a7c71328b4f99e83bbf86148341ea260ee8e9271b9"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554735 29458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7806b893b20c55d1f8afd2a7c71328b4f99e83bbf86148341ea260ee8e9271b9" Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554748 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" event={"ID":"2395900a-ff6b-46ff-92c6-a8a1b5675b67","Type":"ContainerStarted","Data":"85005f16c583a4ab5c5a867ba6783fd03ab5c53c3034265ad8c5c484c0e889f0"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554761 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" event={"ID":"2395900a-ff6b-46ff-92c6-a8a1b5675b67","Type":"ContainerDied","Data":"8f9e5a4db5da2d0ea86986733f581cc247c4f65a008f2bc00c1b7330c1022a26"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554774 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" event={"ID":"2395900a-ff6b-46ff-92c6-a8a1b5675b67","Type":"ContainerStarted","Data":"556cd17b0dd9a0437b38f51d3f691ed442f4e900ac26991a4d6a0e87a7a93e20"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554787 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"4f22802100112023432a8b6ca7c77bb2fc7239f09a3e7d345080a8cf8e397b1e"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554807 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"94d3b7e3742a7d28fa13f4530eb256cdd591ddfdf571150f5be4ed1fc2b06bd6"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554819 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"896ca1240864c042686b8d27bbaf6b98e7018c7035e4ce4b54e7fc7e2545eda3"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554831 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"5c13cc724ceb8a47022a4b506a02a4ffa2182349375d59b27a103a3a379a347a"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554842 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"41ff2358902de9820af0e57b5654a5dd5662e57ab1942e9aa3f97784ba7580d9"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554854 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"d96629c1f566486e43c8e0582e2c2eba47afa3a936c512881f234861d282525c"} Mar 08 22:13:52.554717 master-0 kubenswrapper[29458]: I0308 22:13:52.554867 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"048081af0d4f2d7c89ebdb9c25d0b6b144830ec123396e7ecad6567e008c8334"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.554883 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"9b3f703e2b5dc4f53836c052b0708a079abf7ba89e449465ae68fb01236cf52d"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.554894 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"7e4394146a2df2b894fc7124d9eec1bf24b8531e0bd0dd7d435898a00dec36d0"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.554907 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kx9pl" event={"ID":"e635b0da-956b-4636-bc9b-61f231241908","Type":"ContainerStarted","Data":"10bf0b2fa0214d3d300f54a6ad731b796e7eda2be6d3ed5948e65d2b920e7ced"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.554922 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kx9pl" event={"ID":"e635b0da-956b-4636-bc9b-61f231241908","Type":"ContainerStarted","Data":"c3c767d6aca988650063d67045483c4316fb23551293f63bcb6227962e14fff7"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.554935 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2" event={"ID":"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9","Type":"ContainerStarted","Data":"481a6108588ed0bc22920e61a3ef36e394b22655f3f89fa887b0a577e1e9072c"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.554948 29458 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2" event={"ID":"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9","Type":"ContainerDied","Data":"a22b29816e03690faf00c5c6d5f7ea0b06750cd2c50fe9f666b86154f5e12d55"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.554966 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2" event={"ID":"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9","Type":"ContainerStarted","Data":"d3f24d18018ae4fd0cde9a9605ef8a24287eac4d74c241af3ae19429f61d0495"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.554981 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"78dc543f-66ed-4098-b5a9-699ec2ccc856","Type":"ContainerDied","Data":"b72861ea5791b8527c79a3ba9ca252aad4949d7fe333b8f4afa8d681aa68f9d1"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.554998 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"78dc543f-66ed-4098-b5a9-699ec2ccc856","Type":"ContainerDied","Data":"8885706fe3eb5e1a7daf09d862d9ef81922973f55e3d7589baf732cdce1cb547"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555009 29458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8885706fe3eb5e1a7daf09d862d9ef81922973f55e3d7589baf732cdce1cb547" Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555020 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" event={"ID":"a21e2296-10cb-4c70-ac3e-2173d35faac4","Type":"ContainerStarted","Data":"f8e05400c4242a6c2f3881aef7ae629f7a73530a08ee7893c8a1994c2fbd1380"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555033 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" event={"ID":"a21e2296-10cb-4c70-ac3e-2173d35faac4","Type":"ContainerDied","Data":"d653a3f99cf80e74726e1b1340ca117861fb6803c0c0eb0b6d0a40207c074c3a"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555048 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" event={"ID":"a21e2296-10cb-4c70-ac3e-2173d35faac4","Type":"ContainerStarted","Data":"c9f54e610a612acd73c7eef641d4a04d687bbce1c7479f0807ca8b7e43cd718d"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555061 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"141c1c193013aba156bcafd70b058b224242057d2cf9f83ba4dd26b8100e4d3f"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555100 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"f2753c6ede26e51916276b3918863819c08fcf1e3cfeb773ba0609d9fda8556b"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555115 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" 
event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"19b1636ab72d9a9b9983713d62f8565fb7c16719c6345915ce9c3d89fbded136"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555129 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"c29732d6a1771e6db51e932e553e6dcc162c74870f5049944a74cde5e36091d0"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555144 29458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50d6b53d454870d697b9c573115c109e90d3f7b9c2856d48b483ff4f7d0df63f" Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555154 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" event={"ID":"a5afb146-31d7-4da9-8738-b6c15528233a","Type":"ContainerStarted","Data":"09f644edf932f3c7a117f699d47754e018bad866251462b4281bbbb8c5438352"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555171 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" event={"ID":"a5afb146-31d7-4da9-8738-b6c15528233a","Type":"ContainerDied","Data":"1f70617dd998f936fb35fbf67cf4dddc810c8e16cdc8c2b46a2145b980e52414"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555191 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" event={"ID":"a5afb146-31d7-4da9-8738-b6c15528233a","Type":"ContainerStarted","Data":"5d5dc92efde818d2d1a5f4cbb624b0e37be0ed6b909a72582b68ff8f3ccab573"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555205 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"57a34dbc-eb6d-44f5-b1aa-4762b69382ed","Type":"ContainerDied","Data":"11d598a821a501bbacbf414ba9cb9b4053b94492a8ef82c31d41892148ed5df2"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555220 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"57a34dbc-eb6d-44f5-b1aa-4762b69382ed","Type":"ContainerDied","Data":"acaa687ebf5d39190e2c2ec89078fb51a5c01299107f28308e1d34d40984afd2"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555230 29458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acaa687ebf5d39190e2c2ec89078fb51a5c01299107f28308e1d34d40984afd2" Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555243 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9" event={"ID":"8a7e92d4-b7ed-408b-b7cf-00209a627bea","Type":"ContainerStarted","Data":"5bd0cf5d8baf3a2aa869e1e1bdc081c235c25122fbe0ed40a05cf502e6556dd7"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555257 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9" event={"ID":"8a7e92d4-b7ed-408b-b7cf-00209a627bea","Type":"ContainerStarted","Data":"3e9ee4ba2b30507c13973fee0309fba4893b4e5e93df158a36a62373121b00ef"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555269 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9" 
event={"ID":"8a7e92d4-b7ed-408b-b7cf-00209a627bea","Type":"ContainerStarted","Data":"41f9b34125839a0766d5a064b548741e6d8afe1be3f01659bf8e4366efb2cc07"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555283 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" event={"ID":"d063b330-4180-43de-a248-c573183d96f1","Type":"ContainerStarted","Data":"7a5964149940bfe02b13e1629eac187329873cf8b67f50fef511754fdef9ba33"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555303 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" event={"ID":"d063b330-4180-43de-a248-c573183d96f1","Type":"ContainerStarted","Data":"937b674da18ffd00f3060b7c8bedea19980a79bcc897766e82761f716314d591"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555316 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" event={"ID":"d063b330-4180-43de-a248-c573183d96f1","Type":"ContainerStarted","Data":"defdda10b4f2af3f2f0aeb50bfb3ec0613908d04158d59043799bc29da0a720e"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555470 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" event={"ID":"d063b330-4180-43de-a248-c573183d96f1","Type":"ContainerDied","Data":"f35f20071c5b0df4134c3bd22227a8034ca2417ef7250451b3ec29b800fa74dc"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555497 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" event={"ID":"d063b330-4180-43de-a248-c573183d96f1","Type":"ContainerDied","Data":"6db16eaa3133d25587d14c0b9e526e3d55af3b3bbd2fa785bac1c1b404fb50fd"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555511 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" event={"ID":"d063b330-4180-43de-a248-c573183d96f1","Type":"ContainerStarted","Data":"f7e80d6737a7317d9e7f0a0998357862025d52425ce316b9131469a8ee87029a"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555525 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"345ca27a-f572-4efa-b0ce-dfa8243becd6","Type":"ContainerStarted","Data":"e63666c422a16c752beb8b0b06fe877b0b08af534810c31f0c885141cf9254a6"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555547 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"345ca27a-f572-4efa-b0ce-dfa8243becd6","Type":"ContainerStarted","Data":"5e269b66a082f29b4e767340ca080090685cc35deec8a2ff5b8dffcb5ef07002"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555561 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" event={"ID":"7e0267ba-5dd7-4e81-885f-95b27a7b42ea","Type":"ContainerStarted","Data":"caef745beab0d63a4013a6a6e99e9afcba1e4b4799e5753cb1368b115c97f35f"} Mar 08 22:13:52.565512 
master-0 kubenswrapper[29458]: I0308 22:13:52.555575 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" event={"ID":"7e0267ba-5dd7-4e81-885f-95b27a7b42ea","Type":"ContainerDied","Data":"852d729d09be57b6d61037e6fcf22117d96dfe2b5817fac91c49139db7eb714e"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555591 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" event={"ID":"7e0267ba-5dd7-4e81-885f-95b27a7b42ea","Type":"ContainerStarted","Data":"d14eb63d678bcf527293b2268e60d6e7c54629d3617ad205aa85e0b95e38c0c8"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555605 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0","Type":"ContainerDied","Data":"23ca4cac0c50a9d156ec6ed1b11f780e700b2306444f16b3646285a8a0f6b21b"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555621 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0","Type":"ContainerDied","Data":"cd2c2cc51881256bddd6550f01c7b5dafc5dd571e49b29567f752b73ae5dc26c"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555642 29458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd2c2cc51881256bddd6550f01c7b5dafc5dd571e49b29567f752b73ae5dc26c" Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555655 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mkvtk" event={"ID":"fd9abe2b-f829-4376-9abe-7da0a08770e7","Type":"ContainerStarted","Data":"a081eaa1fe28cb625de6cbd34bf82fe380f1125f6fc13709be875ffb66e10712"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555670 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mkvtk" event={"ID":"fd9abe2b-f829-4376-9abe-7da0a08770e7","Type":"ContainerStarted","Data":"3d5f85e25df37bc23b86ad59b79c59dee68778a01ef1c8a85a90f6ca1894bc34"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555683 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mkvtk" event={"ID":"fd9abe2b-f829-4376-9abe-7da0a08770e7","Type":"ContainerStarted","Data":"f08d60c032a49069a33366a771add75613c8b164c10de5edc94cf407f1fce2c7"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555697 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-65ts8" event={"ID":"0cb21214-292a-48ee-85e2-6b1e62f40cb4","Type":"ContainerStarted","Data":"081d0802e3f974aded513159484c54517ae098c48bd0d0fb786272b12257b48b"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555710 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-65ts8" event={"ID":"0cb21214-292a-48ee-85e2-6b1e62f40cb4","Type":"ContainerStarted","Data":"1cfcb83edf8c27df479212bb6c499d0187e931da1f4d2c86a1e4b18a2365e17f"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555725 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-65ts8" 
event={"ID":"0cb21214-292a-48ee-85e2-6b1e62f40cb4","Type":"ContainerStarted","Data":"940096d4a40b7dc6434a7295ac74e546aac8e0fdcf673fbbc4587227bf159807"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555741 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" event={"ID":"81f5ed55-225c-41e2-bc9d-b41063a604c9","Type":"ContainerStarted","Data":"2afeed653a539a9642286d79c4ea18f7a0df39faf484b243e4c5081f2b8b2452"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555756 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" event={"ID":"81f5ed55-225c-41e2-bc9d-b41063a604c9","Type":"ContainerDied","Data":"b774a43655d7769bfa98aff1d64209f6f402f99c955ad8667823c36ae49e4cf7"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555770 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" event={"ID":"81f5ed55-225c-41e2-bc9d-b41063a604c9","Type":"ContainerStarted","Data":"546b6a60e0c7d74e50a429925cb5072388fd5ebf8c592233957d28ac0705b80e"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555784 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" event={"ID":"66e50eed-e3ac-431f-931b-7c4c848c491b","Type":"ContainerStarted","Data":"dbfa49a582d726e5ea9983357688b4a39d457da61c0391b2dbe1b2423bd4f6ec"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555798 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" event={"ID":"66e50eed-e3ac-431f-931b-7c4c848c491b","Type":"ContainerDied","Data":"bd2fcdaa2b69646a1f5d77c5acf0088cc640d06a976607ae2c22145452d4676a"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555813 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" event={"ID":"66e50eed-e3ac-431f-931b-7c4c848c491b","Type":"ContainerStarted","Data":"75ac8242dd3ac65ec334d068ab89d656dd2f236cc11b5b2166aad268d407590d"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555825 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-c246n" event={"ID":"6eb502a1-db10-46ba-b698-461919464fb9","Type":"ContainerStarted","Data":"24b28697148b3cce0c10494ac1803deb5901b19d5b4c2913633b09d622b49222"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555855 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-c246n" event={"ID":"6eb502a1-db10-46ba-b698-461919464fb9","Type":"ContainerDied","Data":"91654533c4587e9af46f22c13f2fb947540ddaf2d482fd744c4652dfb1a9f5a2"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555870 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-c246n" event={"ID":"6eb502a1-db10-46ba-b698-461919464fb9","Type":"ContainerStarted","Data":"f656606ac6df85fac107c39c0c27a0a282ed80a965624e99277db535c27a6047"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555883 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l8ltx" 
event={"ID":"385e69e4-d443-44bb-8ee4-578a1c902c62","Type":"ContainerStarted","Data":"c4dbb259e0e16bae260c7aeab514c3bce22a0a1df01d7fb94250b416bfcd06a0"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555898 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l8ltx" event={"ID":"385e69e4-d443-44bb-8ee4-578a1c902c62","Type":"ContainerStarted","Data":"3df8fc2c08893e29b5ce6cbd652644e9b2d19ac599e4617011853ce2cd739da7"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555916 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" event={"ID":"3c50dd1f-fcbc-412c-a1cc-0738ea4464e0","Type":"ContainerStarted","Data":"948426f8a7e9fc8067b2b637e9391c90e32f58271131d74f32119f667f74e79b"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555930 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" event={"ID":"3c50dd1f-fcbc-412c-a1cc-0738ea4464e0","Type":"ContainerStarted","Data":"5acb1dbbaadd24be1aa51015d4ffabe0583806b310c9bb173c49c064dc0af3d3"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555949 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" event={"ID":"f3fbcd83-a3e1-4de1-aceb-2692d348e495","Type":"ContainerStarted","Data":"dc257d9f0b8b7220092c839e36e620d477c42e50b90f4361868af98eec13ba42"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555964 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" event={"ID":"f3fbcd83-a3e1-4de1-aceb-2692d348e495","Type":"ContainerStarted","Data":"7209969f44f9ab5882d68093e19acf5d06b62971db17a4f1d85b7a48c8b7b602"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555979 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" event={"ID":"c3af41e9-c604-48da-bec5-df81c2ef3374","Type":"ContainerStarted","Data":"ecaf1243154dde279f8eb70fb3208ec4c39a8e7c7a27d9a0976f08303916202f"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.555993 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" event={"ID":"c3af41e9-c604-48da-bec5-df81c2ef3374","Type":"ContainerStarted","Data":"9c46876fc3ed9e88b423e5e3303487fe77ad4ea83416a3a3950db6e6ac947b05"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.556006 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" event={"ID":"c3af41e9-c604-48da-bec5-df81c2ef3374","Type":"ContainerStarted","Data":"8ff1aa9be63274968b15bcf0a7c20df9e9315bcb35a3d281e9aba68b98723c76"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.556020 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" event={"ID":"c3af41e9-c604-48da-bec5-df81c2ef3374","Type":"ContainerStarted","Data":"128b0bbce1167507413481adcf0cd96d93f47d1c9ffde9e41a211956e1a927c9"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.556038 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" 
event={"ID":"d589bfbb-3a7d-4617-9770-5c9ef737cb4a","Type":"ContainerStarted","Data":"43a9d4a149475717fa1ef3d37fbaab396886033829072b529898dcdefcf58e78"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.556051 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" event={"ID":"d589bfbb-3a7d-4617-9770-5c9ef737cb4a","Type":"ContainerStarted","Data":"da21a3ee43c3a1cb17c48c1a6eb142ca7aa097c1d4b093b742853ab9c1146ede"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.556064 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" event={"ID":"d4d01185-e485-4697-92c2-31a044f25d82","Type":"ContainerStarted","Data":"782960243c6236dea1d6c183e9bbe6b8287c5031207274b6535b2bb6c1a52e4d"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.556096 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" event={"ID":"d4d01185-e485-4697-92c2-31a044f25d82","Type":"ContainerDied","Data":"5af2147c5b6156b079ec16c643f5bc1c46f463b8da9a0f84030507704a3988c2"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.556112 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" event={"ID":"d4d01185-e485-4697-92c2-31a044f25d82","Type":"ContainerStarted","Data":"b606b54eb942579ee14be5af80441dce4b4a9b6234020bb3e61d0131e1fde21b"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.556126 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" event={"ID":"4382d186-34e4-40af-9b92-bb17ddcaa23f","Type":"ContainerStarted","Data":"20e77c441ee0dc697e66d86d013ee46d26feb16aaeeb7f34f104d5c3fdb5ce81"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.556140 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" event={"ID":"4382d186-34e4-40af-9b92-bb17ddcaa23f","Type":"ContainerDied","Data":"41b89fabe8bcfa93d37c680741df23c997dd23bfef1e93509706508b89ba3e17"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.556165 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" event={"ID":"4382d186-34e4-40af-9b92-bb17ddcaa23f","Type":"ContainerStarted","Data":"39ad18e2cdc22131103d7ee2686ffb12580bbefadb50c1a1863e06df883204d5"} Mar 08 22:13:52.565512 master-0 kubenswrapper[29458]: I0308 22:13:52.564929 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 08 22:13:52.574598 master-0 kubenswrapper[29458]: I0308 22:13:52.574524 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 08 22:13:52.575010 master-0 kubenswrapper[29458]: I0308 22:13:52.574938 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 08 22:13:52.577707 master-0 kubenswrapper[29458]: I0308 22:13:52.577651 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 08 22:13:52.578971 master-0 kubenswrapper[29458]: I0308 22:13:52.578933 29458 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-cni-bin\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.579038 master-0 kubenswrapper[29458]: I0308 22:13:52.579002 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d0641333-feda-44c5-baf5-ceee4ce3fd8f-available-featuregates\") pod \"openshift-config-operator-64488f9d78-krpfs\" (UID: \"d0641333-feda-44c5-baf5-ceee4ce3fd8f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" Mar 08 22:13:52.579115 master-0 kubenswrapper[29458]: I0308 22:13:52.579053 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89619d97-2c16-4e76-ba80-8b519f6a9366-catalog-content\") pod \"community-operators-47cmq\" (UID: \"89619d97-2c16-4e76-ba80-8b519f6a9366\") " pod="openshift-marketplace/community-operators-47cmq" Mar 08 22:13:52.579411 master-0 kubenswrapper[29458]: I0308 22:13:52.579384 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d0641333-feda-44c5-baf5-ceee4ce3fd8f-available-featuregates\") pod \"openshift-config-operator-64488f9d78-krpfs\" (UID: \"d0641333-feda-44c5-baf5-ceee4ce3fd8f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" Mar 08 22:13:52.579507 master-0 kubenswrapper[29458]: I0308 22:13:52.579464 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9-service-ca\") pod \"cluster-version-operator-8c9c967c7-ln9l2\" (UID: \"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2" Mar 08 22:13:52.579602 master-0 kubenswrapper[29458]: I0308 22:13:52.579503 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4382d186-34e4-40af-9b92-bb17ddcaa23f-config\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 22:13:52.579602 master-0 kubenswrapper[29458]: I0308 22:13:52.579536 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-run-multus-certs\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 22:13:52.579602 master-0 kubenswrapper[29458]: I0308 22:13:52.579548 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89619d97-2c16-4e76-ba80-8b519f6a9366-catalog-content\") pod \"community-operators-47cmq\" (UID: \"89619d97-2c16-4e76-ba80-8b519f6a9366\") " pod="openshift-marketplace/community-operators-47cmq" Mar 08 22:13:52.579602 master-0 kubenswrapper[29458]: I0308 22:13:52.579569 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngf2z\" (UniqueName: 
\"kubernetes.io/projected/d4d01185-e485-4697-92c2-31a044f25d82-kube-api-access-ngf2z\") pod \"openshift-controller-manager-operator-8565d84698-x8jg8\" (UID: \"d4d01185-e485-4697-92c2-31a044f25d82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" Mar 08 22:13:52.579767 master-0 kubenswrapper[29458]: I0308 22:13:52.579635 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-daemon-config\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 22:13:52.579767 master-0 kubenswrapper[29458]: I0308 22:13:52.579700 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l47w\" (UniqueName: \"kubernetes.io/projected/2851c096-f5cb-4a46-a5a0-ac0b1341033b-kube-api-access-2l47w\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 22:13:52.579849 master-0 kubenswrapper[29458]: I0308 22:13:52.579740 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04fb7bdb-fb5a-4187-94a3-67c8f09684ed-config\") pod \"kube-apiserver-operator-68bd585b-mww2c\" (UID: \"04fb7bdb-fb5a-4187-94a3-67c8f09684ed\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" Mar 08 22:13:52.579905 master-0 kubenswrapper[29458]: I0308 22:13:52.579829 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b358dcb7-d01f-4206-b636-b55a599a73bd-iptables-alerter-script\") pod \"iptables-alerter-pwn9k\" (UID: \"b358dcb7-d01f-4206-b636-b55a599a73bd\") " pod="openshift-network-operator/iptables-alerter-pwn9k" Mar 08 22:13:52.579905 master-0 kubenswrapper[29458]: I0308 22:13:52.579897 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-client-ca\") pod \"controller-manager-f7df5f5b-txsrq\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") " pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 22:13:52.579991 master-0 kubenswrapper[29458]: I0308 22:13:52.579963 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a913c639-ebfc-42a3-85cd-8a460027d3ec-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 22:13:52.580041 master-0 kubenswrapper[29458]: I0308 22:13:52.579996 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 22:13:52.580138 master-0 kubenswrapper[29458]: I0308 22:13:52.580053 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-etcd-serving-ca\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 22:13:52.580191 master-0 kubenswrapper[29458]: I0308 22:13:52.580141 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/de89c423-0f2a-440f-9fa9-92fefea84b09-operand-assets\") pod \"cluster-olm-operator-77899cf6d-mnf25\" (UID: \"de89c423-0f2a-440f-9fa9-92fefea84b09\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" Mar 08 22:13:52.580238 master-0 kubenswrapper[29458]: I0308 22:13:52.580213 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-5ljhh\" (UID: \"7e0267ba-5dd7-4e81-885f-95b27a7b42ea\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 22:13:52.580311 master-0 kubenswrapper[29458]: I0308 22:13:52.580285 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-systemd-units\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.580377 master-0 kubenswrapper[29458]: I0308 22:13:52.580318 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-sys\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 22:13:52.580459 master-0 kubenswrapper[29458]: I0308 22:13:52.580405 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-p68k6\" (UID: \"c228b17c-fd7b-4273-ac03-eac5d4a3a4ad\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6" Mar 08 22:13:52.580499 master-0 kubenswrapper[29458]: I0308 22:13:52.580460 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4382d186-34e4-40af-9b92-bb17ddcaa23f-config\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 22:13:52.580499 master-0 kubenswrapper[29458]: I0308 22:13:52.580482 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-var-lib-openvswitch\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.580583 master-0 kubenswrapper[29458]: I0308 22:13:52.580553 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj5rx\" (UniqueName: \"kubernetes.io/projected/89619d97-2c16-4e76-ba80-8b519f6a9366-kube-api-access-zj5rx\") pod 
\"community-operators-47cmq\" (UID: \"89619d97-2c16-4e76-ba80-8b519f6a9366\") " pod="openshift-marketplace/community-operators-47cmq" Mar 08 22:13:52.580739 master-0 kubenswrapper[29458]: I0308 22:13:52.580694 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04fb7bdb-fb5a-4187-94a3-67c8f09684ed-config\") pod \"kube-apiserver-operator-68bd585b-mww2c\" (UID: \"04fb7bdb-fb5a-4187-94a3-67c8f09684ed\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" Mar 08 22:13:52.580828 master-0 kubenswrapper[29458]: I0308 22:13:52.580783 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-daemon-config\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 22:13:52.580914 master-0 kubenswrapper[29458]: I0308 22:13:52.580842 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl4xt\" (UniqueName: \"kubernetes.io/projected/44e67e41-045e-42ef-8f60-6ef15606d6a2-kube-api-access-zl4xt\") pod \"network-metrics-daemon-lqdbv\" (UID: \"44e67e41-045e-42ef-8f60-6ef15606d6a2\") " pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 22:13:52.580952 master-0 kubenswrapper[29458]: I0308 22:13:52.580927 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-trusted-ca\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 22:13:52.581053 master-0 kubenswrapper[29458]: I0308 22:13:52.580987 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4382d186-34e4-40af-9b92-bb17ddcaa23f-serving-cert\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 22:13:52.581166 master-0 kubenswrapper[29458]: I0308 22:13:52.581127 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-os-release\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 22:13:52.581243 master-0 kubenswrapper[29458]: I0308 22:13:52.581176 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-encryption-config\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 22:13:52.581243 master-0 kubenswrapper[29458]: I0308 22:13:52.581218 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmfqq\" (UniqueName: \"kubernetes.io/projected/c901b468-b8e9-48f8-8050-0d54e24e2adb-kube-api-access-hmfqq\") pod \"csi-snapshot-controller-7577d6f48-wklhr\" (UID: \"c901b468-b8e9-48f8-8050-0d54e24e2adb\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" Mar 08 22:13:52.581371 master-0 kubenswrapper[29458]: I0308 22:13:52.581337 29458 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:13:52.581423 master-0 kubenswrapper[29458]: I0308 22:13:52.581387 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-run-k8s-cni-cncf-io\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 22:13:52.581469 master-0 kubenswrapper[29458]: I0308 22:13:52.581454 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2851c096-f5cb-4a46-a5a0-ac0b1341033b-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 22:13:52.581551 master-0 kubenswrapper[29458]: I0308 22:13:52.581437 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/de89c423-0f2a-440f-9fa9-92fefea84b09-operand-assets\") pod \"cluster-olm-operator-77899cf6d-mnf25\" (UID: \"de89c423-0f2a-440f-9fa9-92fefea84b09\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" Mar 08 22:13:52.581598 master-0 kubenswrapper[29458]: I0308 22:13:52.581520 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dr4p\" (UniqueName: \"kubernetes.io/projected/df48e7e0-0659-48e2-9b6a-32c964ff47b2-kube-api-access-4dr4p\") pod \"dns-operator-589895fbb7-wtvp5\" (UID: \"df48e7e0-0659-48e2-9b6a-32c964ff47b2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" Mar 08 22:13:52.581740 master-0 kubenswrapper[29458]: I0308 22:13:52.581719 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-5ljhh\" (UID: \"7e0267ba-5dd7-4e81-885f-95b27a7b42ea\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 22:13:52.581875 master-0 kubenswrapper[29458]: I0308 22:13:52.581821 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwqqw\" (UniqueName: \"kubernetes.io/projected/a8e00c74-fb72-4e3d-a22c-c38a4772a813-kube-api-access-gwqqw\") pod \"openshift-apiserver-operator-799b6db4d7-nqz5k\" (UID: \"a8e00c74-fb72-4e3d-a22c-c38a4772a813\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k" Mar 08 22:13:52.581912 master-0 kubenswrapper[29458]: I0308 22:13:52.581894 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6fbc12f-3c27-4a7a-933f-43a55c960335-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-2mspg\" (UID: \"f6fbc12f-3c27-4a7a-933f-43a55c960335\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg" Mar 08 22:13:52.581964 master-0 kubenswrapper[29458]: I0308 22:13:52.581945 
29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b358dcb7-d01f-4206-b636-b55a599a73bd-host-slash\") pod \"iptables-alerter-pwn9k\" (UID: \"b358dcb7-d01f-4206-b636-b55a599a73bd\") " pod="openshift-network-operator/iptables-alerter-pwn9k" Mar 08 22:13:52.582061 master-0 kubenswrapper[29458]: I0308 22:13:52.582038 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4382d186-34e4-40af-9b92-bb17ddcaa23f-serving-cert\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 22:13:52.582120 master-0 kubenswrapper[29458]: I0308 22:13:52.582053 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0641333-feda-44c5-baf5-ceee4ce3fd8f-serving-cert\") pod \"openshift-config-operator-64488f9d78-krpfs\" (UID: \"d0641333-feda-44c5-baf5-ceee4ce3fd8f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" Mar 08 22:13:52.582247 master-0 kubenswrapper[29458]: I0308 22:13:52.582155 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-ovnkube-config\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.582310 master-0 kubenswrapper[29458]: I0308 22:13:52.582291 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clxsk\" (UniqueName: \"kubernetes.io/projected/da51940a-a38f-4baf-9c14-b2f1f46b5aed-kube-api-access-clxsk\") pod \"route-controller-manager-86888d445f-7f74k\" (UID: \"da51940a-a38f-4baf-9c14-b2f1f46b5aed\") " pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" Mar 08 22:13:52.582407 master-0 kubenswrapper[29458]: I0308 22:13:52.582382 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5t9m\" (UniqueName: \"kubernetes.io/projected/088eecd9-a153-4fe0-af5a-78f5bdc0eb6b-kube-api-access-w5t9m\") pod \"redhat-operators-8w7wm\" (UID: \"088eecd9-a153-4fe0-af5a-78f5bdc0eb6b\") " pod="openshift-marketplace/redhat-operators-8w7wm" Mar 08 22:13:52.582485 master-0 kubenswrapper[29458]: I0308 22:13:52.582461 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpxls\" (UniqueName: \"kubernetes.io/projected/081acedd-4c88-461f-80f3-e80fdbadb725-kube-api-access-cpxls\") pod \"ovnkube-control-plane-66b55d57d-ngrjm\" (UID: \"081acedd-4c88-461f-80f3-e80fdbadb725\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" Mar 08 22:13:52.582517 master-0 kubenswrapper[29458]: I0308 22:13:52.582501 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kz92\" (UniqueName: \"kubernetes.io/projected/81f5ed55-225c-41e2-bc9d-b41063a604c9-kube-api-access-7kz92\") pod \"router-default-79f8cd6fdd-4fsdl\" (UID: \"81f5ed55-225c-41e2-bc9d-b41063a604c9\") " pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:13:52.582580 master-0 kubenswrapper[29458]: I0308 22:13:52.582562 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" 
(UniqueName: \"kubernetes.io/host-path/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-node-pullsecrets\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 22:13:52.582875 master-0 kubenswrapper[29458]: I0308 22:13:52.582820 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjrqj\" (UniqueName: \"kubernetes.io/projected/66e50eed-e3ac-431f-931b-7c4c848c491b-kube-api-access-bjrqj\") pod \"insights-operator-8f89dfddd-fn4ck\" (UID: \"66e50eed-e3ac-431f-931b-7c4c848c491b\") " pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" Mar 08 22:13:52.582925 master-0 kubenswrapper[29458]: E0308 22:13:52.582858 29458 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0" Mar 08 22:13:52.582961 master-0 kubenswrapper[29458]: I0308 22:13:52.582922 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d9fe466f-5a23-4f69-8a96-44bd5d6194f5-cert\") pod \"cluster-autoscaler-operator-69576476f7-dvgxg\" (UID: \"d9fe466f-5a23-4f69-8a96-44bd5d6194f5\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" Mar 08 22:13:52.583018 master-0 kubenswrapper[29458]: I0308 22:13:52.582984 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/081acedd-4c88-461f-80f3-e80fdbadb725-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-ngrjm\" (UID: \"081acedd-4c88-461f-80f3-e80fdbadb725\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" Mar 08 22:13:52.583133 master-0 kubenswrapper[29458]: I0308 22:13:52.583090 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/077643a2-ab2d-4f12-9abf-42a1c56d7aff-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-nk294\" (UID: \"077643a2-ab2d-4f12-9abf-42a1c56d7aff\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 22:13:52.583310 master-0 kubenswrapper[29458]: I0308 22:13:52.583283 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6fbc12f-3c27-4a7a-933f-43a55c960335-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-2mspg\" (UID: \"f6fbc12f-3c27-4a7a-933f-43a55c960335\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg" Mar 08 22:13:52.583455 master-0 kubenswrapper[29458]: I0308 22:13:52.583423 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04fb7bdb-fb5a-4187-94a3-67c8f09684ed-serving-cert\") pod \"kube-apiserver-operator-68bd585b-mww2c\" (UID: \"04fb7bdb-fb5a-4187-94a3-67c8f09684ed\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" Mar 08 22:13:52.583512 master-0 kubenswrapper[29458]: I0308 22:13:52.583489 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-5ljhh\" (UID: 
\"7e0267ba-5dd7-4e81-885f-95b27a7b42ea\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 22:13:52.583867 master-0 kubenswrapper[29458]: I0308 22:13:52.583776 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0641333-feda-44c5-baf5-ceee4ce3fd8f-serving-cert\") pod \"openshift-config-operator-64488f9d78-krpfs\" (UID: \"d0641333-feda-44c5-baf5-ceee4ce3fd8f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" Mar 08 22:13:52.583961 master-0 kubenswrapper[29458]: I0308 22:13:52.583929 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/669ef8c8-8a32-4ebd-acc4-e8b2b45286a0-hosts-file\") pod \"node-resolver-qdc2p\" (UID: \"669ef8c8-8a32-4ebd-acc4-e8b2b45286a0\") " pod="openshift-dns/node-resolver-qdc2p" Mar 08 22:13:52.584141 master-0 kubenswrapper[29458]: I0308 22:13:52.584087 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/081acedd-4c88-461f-80f3-e80fdbadb725-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-ngrjm\" (UID: \"081acedd-4c88-461f-80f3-e80fdbadb725\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" Mar 08 22:13:52.584226 master-0 kubenswrapper[29458]: I0308 22:13:52.584151 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-ovnkube-config\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.584226 master-0 kubenswrapper[29458]: I0308 22:13:52.584133 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-run-systemd\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.584351 master-0 kubenswrapper[29458]: I0308 22:13:52.584318 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-host\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 22:13:52.584511 master-0 kubenswrapper[29458]: I0308 22:13:52.584467 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 08 22:13:52.584607 master-0 kubenswrapper[29458]: I0308 22:13:52.584522 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04fb7bdb-fb5a-4187-94a3-67c8f09684ed-serving-cert\") pod \"kube-apiserver-operator-68bd585b-mww2c\" (UID: \"04fb7bdb-fb5a-4187-94a3-67c8f09684ed\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" Mar 08 22:13:52.584648 master-0 kubenswrapper[29458]: I0308 22:13:52.584449 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9xj9\" (UniqueName: \"kubernetes.io/projected/96a67acb-9cc6-4793-b99a-01479b239d76-kube-api-access-d9xj9\") pod \"multus-additional-cni-plugins-74fmb\" (UID: 
\"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 22:13:52.584688 master-0 kubenswrapper[29458]: I0308 22:13:52.584663 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:13:52.584729 master-0 kubenswrapper[29458]: I0308 22:13:52.584705 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:13:52.584789 master-0 kubenswrapper[29458]: I0308 22:13:52.584766 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0cb21214-292a-48ee-85e2-6b1e62f40cb4-config-volume\") pod \"dns-default-65ts8\" (UID: \"0cb21214-292a-48ee-85e2-6b1e62f40cb4\") " pod="openshift-dns/dns-default-65ts8" Mar 08 22:13:52.584829 master-0 kubenswrapper[29458]: I0308 22:13:52.584812 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66e50eed-e3ac-431f-931b-7c4c848c491b-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-fn4ck\" (UID: \"66e50eed-e3ac-431f-931b-7c4c848c491b\") " pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" Mar 08 22:13:52.584892 master-0 kubenswrapper[29458]: I0308 22:13:52.584864 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-os-release\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 22:13:52.584939 master-0 kubenswrapper[29458]: I0308 22:13:52.584926 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-config\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 22:13:52.584979 master-0 kubenswrapper[29458]: I0308 22:13:52.584966 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d9e9c931-9595-42f1-bbc2-c412286f6cd1-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-xwmmm\" (UID: \"d9e9c931-9595-42f1-bbc2-c412286f6cd1\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" Mar 08 22:13:52.585301 master-0 kubenswrapper[29458]: I0308 22:13:52.585246 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/4ef806a4-5486-43a9-8bfa-b1670c888dc1-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-mt484\" (UID: \"4ef806a4-5486-43a9-8bfa-b1670c888dc1\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" Mar 08 22:13:52.585426 master-0 kubenswrapper[29458]: I0308 
22:13:52.585400 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-system-cni-dir\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 22:13:52.585485 master-0 kubenswrapper[29458]: I0308 22:13:52.585440 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d01185-e485-4697-92c2-31a044f25d82-config\") pod \"openshift-controller-manager-operator-8565d84698-x8jg8\" (UID: \"d4d01185-e485-4697-92c2-31a044f25d82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" Mar 08 22:13:52.585534 master-0 kubenswrapper[29458]: I0308 22:13:52.585478 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-audit\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 22:13:52.585673 master-0 kubenswrapper[29458]: I0308 22:13:52.585632 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/077643a2-ab2d-4f12-9abf-42a1c56d7aff-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-nk294\" (UID: \"077643a2-ab2d-4f12-9abf-42a1c56d7aff\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 22:13:52.585761 master-0 kubenswrapper[29458]: I0308 22:13:52.585705 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls\") pod \"dns-operator-589895fbb7-wtvp5\" (UID: \"df48e7e0-0659-48e2-9b6a-32c964ff47b2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" Mar 08 22:13:52.585810 master-0 kubenswrapper[29458]: I0308 22:13:52.585770 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89619d97-2c16-4e76-ba80-8b519f6a9366-utilities\") pod \"community-operators-47cmq\" (UID: \"89619d97-2c16-4e76-ba80-8b519f6a9366\") " pod="openshift-marketplace/community-operators-47cmq" Mar 08 22:13:52.585984 master-0 kubenswrapper[29458]: I0308 22:13:52.585955 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89619d97-2c16-4e76-ba80-8b519f6a9366-utilities\") pod \"community-operators-47cmq\" (UID: \"89619d97-2c16-4e76-ba80-8b519f6a9366\") " pod="openshift-marketplace/community-operators-47cmq" Mar 08 22:13:52.586043 master-0 kubenswrapper[29458]: I0308 22:13:52.586014 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-qv4bv\" (UID: \"2a91f36f-900e-4b99-9be1-dfc61d8e31d9\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 22:13:52.586110 master-0 kubenswrapper[29458]: I0308 22:13:52.585918 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-config\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 22:13:52.586110 master-0 kubenswrapper[29458]: I0308 22:13:52.586104 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 22:13:52.586212 master-0 kubenswrapper[29458]: I0308 22:13:52.586143 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a5afb146-31d7-4da9-8738-b6c15528233a-audit-policies\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 22:13:52.586212 master-0 kubenswrapper[29458]: I0308 22:13:52.586135 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d01185-e485-4697-92c2-31a044f25d82-config\") pod \"openshift-controller-manager-operator-8565d84698-x8jg8\" (UID: \"d4d01185-e485-4697-92c2-31a044f25d82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" Mar 08 22:13:52.586212 master-0 kubenswrapper[29458]: I0308 22:13:52.586179 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dfe625a1-5ba4-491f-9ab3-5d91154961a0-webhook-cert\") pod \"network-node-identity-trhtl\" (UID: \"dfe625a1-5ba4-491f-9ab3-5d91154961a0\") " pod="openshift-network-node-identity/network-node-identity-trhtl" Mar 08 22:13:52.586212 master-0 kubenswrapper[29458]: I0308 22:13:52.586216 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d01185-e485-4697-92c2-31a044f25d82-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-x8jg8\" (UID: \"d4d01185-e485-4697-92c2-31a044f25d82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" Mar 08 22:13:52.586843 master-0 kubenswrapper[29458]: I0308 22:13:52.586165 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/df48e7e0-0659-48e2-9b6a-32c964ff47b2-metrics-tls\") pod \"dns-operator-589895fbb7-wtvp5\" (UID: \"df48e7e0-0659-48e2-9b6a-32c964ff47b2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" Mar 08 22:13:52.586843 master-0 kubenswrapper[29458]: I0308 22:13:52.586222 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/4ef806a4-5486-43a9-8bfa-b1670c888dc1-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-mt484\" (UID: \"4ef806a4-5486-43a9-8bfa-b1670c888dc1\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" Mar 08 22:13:52.586843 master-0 kubenswrapper[29458]: I0308 22:13:52.586445 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/e8ef68b9-6f8d-4697-b269-91ee4e310752-signing-cabundle\") pod \"service-ca-84bfdbbb7f-b8zkz\" (UID: \"e8ef68b9-6f8d-4697-b269-91ee4e310752\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-b8zkz" Mar 08 22:13:52.586843 master-0 kubenswrapper[29458]: I0308 22:13:52.586508 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 22:13:52.586843 master-0 kubenswrapper[29458]: I0308 22:13:52.586598 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dfe625a1-5ba4-491f-9ab3-5d91154961a0-webhook-cert\") pod \"network-node-identity-trhtl\" (UID: \"dfe625a1-5ba4-491f-9ab3-5d91154961a0\") " pod="openshift-network-node-identity/network-node-identity-trhtl" Mar 08 22:13:52.586843 master-0 kubenswrapper[29458]: I0308 22:13:52.586559 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d9e9c931-9595-42f1-bbc2-c412286f6cd1-images\") pod \"cluster-baremetal-operator-5cdb4c5598-xwmmm\" (UID: \"d9e9c931-9595-42f1-bbc2-c412286f6cd1\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" Mar 08 22:13:52.586843 master-0 kubenswrapper[29458]: I0308 22:13:52.586665 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d01185-e485-4697-92c2-31a044f25d82-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-x8jg8\" (UID: \"d4d01185-e485-4697-92c2-31a044f25d82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" Mar 08 22:13:52.588391 master-0 kubenswrapper[29458]: I0308 22:13:52.588321 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 08 22:13:52.588491 master-0 kubenswrapper[29458]: I0308 22:13:52.588446 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-node-log\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.588581 master-0 kubenswrapper[29458]: I0308 22:13:52.588541 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-ovn-node-metrics-cert\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.588632 master-0 kubenswrapper[29458]: I0308 22:13:52.588600 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2851c096-f5cb-4a46-a5a0-ac0b1341033b-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: 
\"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 22:13:52.588663 master-0 kubenswrapper[29458]: I0308 22:13:52.588627 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-conf-dir\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 22:13:52.588739 master-0 kubenswrapper[29458]: I0308 22:13:52.588700 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04fb7bdb-fb5a-4187-94a3-67c8f09684ed-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-mww2c\" (UID: \"04fb7bdb-fb5a-4187-94a3-67c8f09684ed\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" Mar 08 22:13:52.588827 master-0 kubenswrapper[29458]: I0308 22:13:52.588788 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fd9abe2b-f829-4376-9abe-7da0a08770e7-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-mkvtk\" (UID: \"fd9abe2b-f829-4376-9abe-7da0a08770e7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mkvtk" Mar 08 22:13:52.588905 master-0 kubenswrapper[29458]: I0308 22:13:52.588871 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-sysctl-d\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 22:13:52.588975 master-0 kubenswrapper[29458]: I0308 22:13:52.588948 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-kubernetes\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 22:13:52.589142 master-0 kubenswrapper[29458]: I0308 22:13:52.589106 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-sysctl-conf\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 22:13:52.589226 master-0 kubenswrapper[29458]: I0308 22:13:52.589193 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66e50eed-e3ac-431f-931b-7c4c848c491b-serving-cert\") pod \"insights-operator-8f89dfddd-fn4ck\" (UID: \"66e50eed-e3ac-431f-931b-7c4c848c491b\") " pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" Mar 08 22:13:52.589273 master-0 kubenswrapper[29458]: I0308 22:13:52.589254 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/de89c423-0f2a-440f-9fa9-92fefea84b09-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-mnf25\" (UID: \"de89c423-0f2a-440f-9fa9-92fefea84b09\") " 
pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" Mar 08 22:13:52.589315 master-0 kubenswrapper[29458]: I0308 22:13:52.589296 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 22:13:52.589380 master-0 kubenswrapper[29458]: I0308 22:13:52.589335 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-var-lib-cni-bin\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 22:13:52.589432 master-0 kubenswrapper[29458]: I0308 22:13:52.589407 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-proxy-ca-bundles\") pod \"controller-manager-f7df5f5b-txsrq\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") " pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 22:13:52.589472 master-0 kubenswrapper[29458]: I0308 22:13:52.589456 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-ovnkube-script-lib\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.589509 master-0 kubenswrapper[29458]: I0308 22:13:52.589495 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:13:52.589557 master-0 kubenswrapper[29458]: I0308 22:13:52.589534 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h4vv\" (UniqueName: \"kubernetes.io/projected/de89c423-0f2a-440f-9fa9-92fefea84b09-kube-api-access-7h4vv\") pod \"cluster-olm-operator-77899cf6d-mnf25\" (UID: \"de89c423-0f2a-440f-9fa9-92fefea84b09\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" Mar 08 22:13:52.589597 master-0 kubenswrapper[29458]: I0308 22:13:52.589548 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-5ljhh\" (UID: \"7e0267ba-5dd7-4e81-885f-95b27a7b42ea\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 22:13:52.589597 master-0 kubenswrapper[29458]: I0308 22:13:52.589569 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/de89c423-0f2a-440f-9fa9-92fefea84b09-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-mnf25\" (UID: \"de89c423-0f2a-440f-9fa9-92fefea84b09\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" Mar 08 22:13:52.589597 master-0 
kubenswrapper[29458]: I0308 22:13:52.589569 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4382d186-34e4-40af-9b92-bb17ddcaa23f-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 22:13:52.589707 master-0 kubenswrapper[29458]: I0308 22:13:52.589577 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 08 22:13:52.589707 master-0 kubenswrapper[29458]: I0308 22:13:52.589636 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xcbb\" (UniqueName: \"kubernetes.io/projected/a21e2296-10cb-4c70-ac3e-2173d35faac4-kube-api-access-7xcbb\") pod \"network-operator-7c649bf6d4-znt8q\" (UID: \"a21e2296-10cb-4c70-ac3e-2173d35faac4\") " pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" Mar 08 22:13:52.589707 master-0 kubenswrapper[29458]: I0308 22:13:52.589654 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-metrics-tls\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 22:13:52.589707 master-0 kubenswrapper[29458]: I0308 22:13:52.589676 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqkp4\" (UniqueName: \"kubernetes.io/projected/2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8-kube-api-access-dqkp4\") pod \"cloud-credential-operator-55d85b7b47-mfqlz\" (UID: \"2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz" Mar 08 22:13:52.589864 master-0 kubenswrapper[29458]: I0308 22:13:52.589714 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/81f5ed55-225c-41e2-bc9d-b41063a604c9-stats-auth\") pod \"router-default-79f8cd6fdd-4fsdl\" (UID: \"81f5ed55-225c-41e2-bc9d-b41063a604c9\") " pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:13:52.589864 master-0 kubenswrapper[29458]: I0308 22:13:52.589751 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb2lv\" (UniqueName: \"kubernetes.io/projected/669ef8c8-8a32-4ebd-acc4-e8b2b45286a0-kube-api-access-jb2lv\") pod \"node-resolver-qdc2p\" (UID: \"669ef8c8-8a32-4ebd-acc4-e8b2b45286a0\") " pod="openshift-dns/node-resolver-qdc2p" Mar 08 22:13:52.589864 master-0 kubenswrapper[29458]: I0308 22:13:52.589788 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 08 22:13:52.589864 master-0 kubenswrapper[29458]: I0308 22:13:52.589830 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z7fx\" (UniqueName: \"kubernetes.io/projected/971ffa86-4d52-4dc3-ba28-03d116ec3494-kube-api-access-7z7fx\") pod \"kube-storage-version-migrator-operator-7f65c457f5-zk8sw\" (UID: 
\"971ffa86-4d52-4dc3-ba28-03d116ec3494\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw" Mar 08 22:13:52.589864 master-0 kubenswrapper[29458]: I0308 22:13:52.589846 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4382d186-34e4-40af-9b92-bb17ddcaa23f-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 22:13:52.590055 master-0 kubenswrapper[29458]: I0308 22:13:52.589940 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hstt\" (UniqueName: \"kubernetes.io/projected/4382d186-34e4-40af-9b92-bb17ddcaa23f-kube-api-access-2hstt\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 22:13:52.590177 master-0 kubenswrapper[29458]: I0308 22:13:52.590144 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-var-lib-cni-multus\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 22:13:52.590229 master-0 kubenswrapper[29458]: I0308 22:13:52.590198 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-config\") pod \"controller-manager-f7df5f5b-txsrq\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") " pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 22:13:52.590271 master-0 kubenswrapper[29458]: I0308 22:13:52.590232 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp26r\" (UniqueName: \"kubernetes.io/projected/077643a2-ab2d-4f12-9abf-42a1c56d7aff-kube-api-access-mp26r\") pod \"operator-controller-controller-manager-6598bfb6c4-nk294\" (UID: \"077643a2-ab2d-4f12-9abf-42a1c56d7aff\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 22:13:52.590313 master-0 kubenswrapper[29458]: I0308 22:13:52.590270 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-image-import-ca\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 22:13:52.590363 master-0 kubenswrapper[29458]: I0308 22:13:52.590319 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-qv4bv\" (UID: \"2a91f36f-900e-4b99-9be1-dfc61d8e31d9\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 22:13:52.590405 master-0 kubenswrapper[29458]: I0308 22:13:52.590375 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdfls\" (UniqueName: \"kubernetes.io/projected/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad-kube-api-access-sdfls\") pod \"cluster-storage-operator-6fbfc8dc8f-p68k6\" (UID: 
\"c228b17c-fd7b-4273-ac03-eac5d4a3a4ad\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6" Mar 08 22:13:52.590405 master-0 kubenswrapper[29458]: I0308 22:13:52.590401 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/d9e9c931-9595-42f1-bbc2-c412286f6cd1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-xwmmm\" (UID: \"d9e9c931-9595-42f1-bbc2-c412286f6cd1\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" Mar 08 22:13:52.590477 master-0 kubenswrapper[29458]: I0308 22:13:52.590447 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 22:13:52.590519 master-0 kubenswrapper[29458]: I0308 22:13:52.590491 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:13:52.590565 master-0 kubenswrapper[29458]: I0308 22:13:52.590540 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv57k\" (UniqueName: \"kubernetes.io/projected/be431b74-1116-4b0f-8b25-bbb0408411b0-kube-api-access-tv57k\") pod \"package-server-manager-854648ff6d-x5zxr\" (UID: \"be431b74-1116-4b0f-8b25-bbb0408411b0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 22:13:52.590644 master-0 kubenswrapper[29458]: I0308 22:13:52.590569 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjt52\" (UniqueName: \"kubernetes.io/projected/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-kube-api-access-jjt52\") pod \"marketplace-operator-64bf9778cb-5ljhh\" (UID: \"7e0267ba-5dd7-4e81-885f-95b27a7b42ea\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 22:13:52.590644 master-0 kubenswrapper[29458]: I0308 22:13:52.590632 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e8ef68b9-6f8d-4697-b269-91ee4e310752-signing-key\") pod \"service-ca-84bfdbbb7f-b8zkz\" (UID: \"e8ef68b9-6f8d-4697-b269-91ee4e310752\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-b8zkz" Mar 08 22:13:52.590726 master-0 kubenswrapper[29458]: I0308 22:13:52.590679 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 08 22:13:52.590726 master-0 kubenswrapper[29458]: I0308 22:13:52.590706 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvmk7\" (UniqueName: \"kubernetes.io/projected/d9fe466f-5a23-4f69-8a96-44bd5d6194f5-kube-api-access-nvmk7\") pod \"cluster-autoscaler-operator-69576476f7-dvgxg\" (UID: 
\"d9fe466f-5a23-4f69-8a96-44bd5d6194f5\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" Mar 08 22:13:52.590815 master-0 kubenswrapper[29458]: I0308 22:13:52.590728 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-run-openvswitch\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.590815 master-0 kubenswrapper[29458]: I0308 22:13:52.590775 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-serving-cert\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 22:13:52.590815 master-0 kubenswrapper[29458]: I0308 22:13:52.590798 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da51940a-a38f-4baf-9c14-b2f1f46b5aed-config\") pod \"route-controller-manager-86888d445f-7f74k\" (UID: \"da51940a-a38f-4baf-9c14-b2f1f46b5aed\") " pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" Mar 08 22:13:52.590921 master-0 kubenswrapper[29458]: I0308 22:13:52.590847 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-hostroot\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 22:13:52.590921 master-0 kubenswrapper[29458]: I0308 22:13:52.590872 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-log-socket\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.590984 master-0 kubenswrapper[29458]: I0308 22:13:52.590937 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da51940a-a38f-4baf-9c14-b2f1f46b5aed-serving-cert\") pod \"route-controller-manager-86888d445f-7f74k\" (UID: \"da51940a-a38f-4baf-9c14-b2f1f46b5aed\") " pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" Mar 08 22:13:52.591013 master-0 kubenswrapper[29458]: I0308 22:13:52.590981 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d851f97-b21e-432e-a4c3-dc0a8ff00e84-config\") pod \"service-ca-operator-69b6fc6b88-pkcp5\" (UID: \"0d851f97-b21e-432e-a4c3-dc0a8ff00e84\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5" Mar 08 22:13:52.591040 master-0 kubenswrapper[29458]: I0308 22:13:52.591025 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-ln9l2\" (UID: \"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2" Mar 08 22:13:52.591140 master-0 kubenswrapper[29458]: 
I0308 22:13:52.591099 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-etc-openvswitch\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.591140 master-0 kubenswrapper[29458]: I0308 22:13:52.591132 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg2dp\" (UniqueName: \"kubernetes.io/projected/0cb21214-292a-48ee-85e2-6b1e62f40cb4-kube-api-access-sg2dp\") pod \"dns-default-65ts8\" (UID: \"0cb21214-292a-48ee-85e2-6b1e62f40cb4\") " pod="openshift-dns/dns-default-65ts8" Mar 08 22:13:52.591227 master-0 kubenswrapper[29458]: I0308 22:13:52.591159 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/077643a2-ab2d-4f12-9abf-42a1c56d7aff-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-nk294\" (UID: \"077643a2-ab2d-4f12-9abf-42a1c56d7aff\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 22:13:52.591306 master-0 kubenswrapper[29458]: I0308 22:13:52.591275 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-cni-netd\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.591306 master-0 kubenswrapper[29458]: I0308 22:13:52.591288 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/077643a2-ab2d-4f12-9abf-42a1c56d7aff-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-nk294\" (UID: \"077643a2-ab2d-4f12-9abf-42a1c56d7aff\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 22:13:52.591382 master-0 kubenswrapper[29458]: I0308 22:13:52.591276 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d851f97-b21e-432e-a4c3-dc0a8ff00e84-config\") pod \"service-ca-operator-69b6fc6b88-pkcp5\" (UID: \"0d851f97-b21e-432e-a4c3-dc0a8ff00e84\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5" Mar 08 22:13:52.591382 master-0 kubenswrapper[29458]: I0308 22:13:52.591325 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a5afb146-31d7-4da9-8738-b6c15528233a-etcd-serving-ca\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 22:13:52.591382 master-0 kubenswrapper[29458]: I0308 22:13:52.591361 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-cnibin\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 22:13:52.591502 master-0 kubenswrapper[29458]: I0308 22:13:52.591391 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b849f992-1020-4633-98be-75705b962fa9-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-8pqc2\" (UID: \"b849f992-1020-4633-98be-75705b962fa9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2" Mar 08 22:13:52.591502 master-0 kubenswrapper[29458]: I0308 22:13:52.591408 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 22:13:52.591502 master-0 kubenswrapper[29458]: I0308 22:13:52.591421 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-system-cni-dir\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 22:13:52.591502 master-0 kubenswrapper[29458]: I0308 22:13:52.591449 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ht4t\" (UniqueName: \"kubernetes.io/projected/e8ef68b9-6f8d-4697-b269-91ee4e310752-kube-api-access-6ht4t\") pod \"service-ca-84bfdbbb7f-b8zkz\" (UID: \"e8ef68b9-6f8d-4697-b269-91ee4e310752\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-b8zkz" Mar 08 22:13:52.591502 master-0 kubenswrapper[29458]: I0308 22:13:52.591475 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert\") pod \"catalog-operator-7d9c49f57b-6q5t2\" (UID: \"83b5f0b6-adee-4820-8212-b4d182b178d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" Mar 08 22:13:52.591502 master-0 kubenswrapper[29458]: I0308 22:13:52.591498 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-run-netns\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.591721 master-0 kubenswrapper[29458]: I0308 22:13:52.591525 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-run\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 22:13:52.591721 master-0 kubenswrapper[29458]: I0308 22:13:52.591549 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7e4fb17aa6f4ce82697c1badb6e3e623-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7e4fb17aa6f4ce82697c1badb6e3e623\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:13:52.591721 master-0 kubenswrapper[29458]: I0308 22:13:52.591577 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f3fbcd83-a3e1-4de1-aceb-2692d348e495-tmp\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " 
pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 22:13:52.591721 master-0 kubenswrapper[29458]: I0308 22:13:52.591626 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/081acedd-4c88-461f-80f3-e80fdbadb725-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-ngrjm\" (UID: \"081acedd-4c88-461f-80f3-e80fdbadb725\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" Mar 08 22:13:52.591721 master-0 kubenswrapper[29458]: I0308 22:13:52.591667 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwdhp\" (UniqueName: \"kubernetes.io/projected/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-kube-api-access-vwdhp\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 22:13:52.591721 master-0 kubenswrapper[29458]: I0308 22:13:52.591675 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-trusted-ca\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 22:13:52.591721 master-0 kubenswrapper[29458]: I0308 22:13:52.591697 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b849f992-1020-4633-98be-75705b962fa9-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-8pqc2\" (UID: \"b849f992-1020-4633-98be-75705b962fa9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2" Mar 08 22:13:52.591983 master-0 kubenswrapper[29458]: I0308 22:13:52.591669 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f3fbcd83-a3e1-4de1-aceb-2692d348e495-tmp\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 22:13:52.591983 master-0 kubenswrapper[29458]: I0308 22:13:52.591707 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/83b5f0b6-adee-4820-8212-b4d182b178d2-srv-cert\") pod \"catalog-operator-7d9c49f57b-6q5t2\" (UID: \"83b5f0b6-adee-4820-8212-b4d182b178d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" Mar 08 22:13:52.591983 master-0 kubenswrapper[29458]: I0308 22:13:52.591700 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvp5b\" (UniqueName: \"kubernetes.io/projected/a5afb146-31d7-4da9-8738-b6c15528233a-kube-api-access-mvp5b\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 22:13:52.591983 master-0 kubenswrapper[29458]: I0308 22:13:52.591792 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/081acedd-4c88-461f-80f3-e80fdbadb725-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-ngrjm\" (UID: \"081acedd-4c88-461f-80f3-e80fdbadb725\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" Mar 08 22:13:52.591983 master-0 kubenswrapper[29458]: I0308 22:13:52.591824 29458 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9e9c931-9595-42f1-bbc2-c412286f6cd1-config\") pod \"cluster-baremetal-operator-5cdb4c5598-xwmmm\" (UID: \"d9e9c931-9595-42f1-bbc2-c412286f6cd1\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" Mar 08 22:13:52.591983 master-0 kubenswrapper[29458]: I0308 22:13:52.591864 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-audit-dir\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 22:13:52.591983 master-0 kubenswrapper[29458]: I0308 22:13:52.591896 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-lib-modules\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 22:13:52.591983 master-0 kubenswrapper[29458]: I0308 22:13:52.591928 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4382d186-34e4-40af-9b92-bb17ddcaa23f-etcd-ca\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 22:13:52.591983 master-0 kubenswrapper[29458]: I0308 22:13:52.591976 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-tuned\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 22:13:52.592357 master-0 kubenswrapper[29458]: I0308 22:13:52.592016 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tlmx\" (UniqueName: \"kubernetes.io/projected/0d851f97-b21e-432e-a4c3-dc0a8ff00e84-kube-api-access-7tlmx\") pod \"service-ca-operator-69b6fc6b88-pkcp5\" (UID: \"0d851f97-b21e-432e-a4c3-dc0a8ff00e84\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5" Mar 08 22:13:52.592357 master-0 kubenswrapper[29458]: I0308 22:13:52.592044 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pwq4\" (UniqueName: \"kubernetes.io/projected/83b5f0b6-adee-4820-8212-b4d182b178d2-kube-api-access-5pwq4\") pod \"catalog-operator-7d9c49f57b-6q5t2\" (UID: \"83b5f0b6-adee-4820-8212-b4d182b178d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" Mar 08 22:13:52.592357 master-0 kubenswrapper[29458]: I0308 22:13:52.592086 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-env-overrides\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.592357 master-0 kubenswrapper[29458]: I0308 22:13:52.592187 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4382d186-34e4-40af-9b92-bb17ddcaa23f-etcd-ca\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: 
\"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 22:13:52.592357 master-0 kubenswrapper[29458]: I0308 22:13:52.592230 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-tuned\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 22:13:52.592357 master-0 kubenswrapper[29458]: I0308 22:13:52.592307 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpb8q\" (UniqueName: \"kubernetes.io/projected/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-kube-api-access-lpb8q\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 22:13:52.592357 master-0 kubenswrapper[29458]: I0308 22:13:52.592324 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-env-overrides\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.592357 master-0 kubenswrapper[29458]: I0308 22:13:52.592345 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/dfe625a1-5ba4-491f-9ab3-5d91154961a0-ovnkube-identity-cm\") pod \"network-node-identity-trhtl\" (UID: \"dfe625a1-5ba4-491f-9ab3-5d91154961a0\") " pod="openshift-network-node-identity/network-node-identity-trhtl" Mar 08 22:13:52.592565 master-0 kubenswrapper[29458]: I0308 22:13:52.592374 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 22:13:52.592565 master-0 kubenswrapper[29458]: I0308 22:13:52.592404 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a21e2296-10cb-4c70-ac3e-2173d35faac4-metrics-tls\") pod \"network-operator-7c649bf6d4-znt8q\" (UID: \"a21e2296-10cb-4c70-ac3e-2173d35faac4\") " pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" Mar 08 22:13:52.592565 master-0 kubenswrapper[29458]: I0308 22:13:52.592524 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6eb502a1-db10-46ba-b698-461919464fb9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-c246n\" (UID: \"6eb502a1-db10-46ba-b698-461919464fb9\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-c246n" Mar 08 22:13:52.592651 master-0 kubenswrapper[29458]: I0308 22:13:52.592566 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-var-lib-kubelet\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 22:13:52.592651 master-0 kubenswrapper[29458]: I0308 22:13:52.592596 29458 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/96a67acb-9cc6-4793-b99a-01479b239d76-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 22:13:52.592651 master-0 kubenswrapper[29458]: I0308 22:13:52.592601 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/dfe625a1-5ba4-491f-9ab3-5d91154961a0-ovnkube-identity-cm\") pod \"network-node-identity-trhtl\" (UID: \"dfe625a1-5ba4-491f-9ab3-5d91154961a0\") " pod="openshift-network-node-identity/network-node-identity-trhtl" Mar 08 22:13:52.592651 master-0 kubenswrapper[29458]: I0308 22:13:52.592627 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a5afb146-31d7-4da9-8738-b6c15528233a-audit-dir\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 22:13:52.592763 master-0 kubenswrapper[29458]: I0308 22:13:52.592661 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v6dc\" (UniqueName: \"kubernetes.io/projected/2395900a-ff6b-46ff-92c6-a8a1b5675b67-kube-api-access-7v6dc\") pod \"controller-manager-f7df5f5b-txsrq\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") " pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 22:13:52.592763 master-0 kubenswrapper[29458]: I0308 22:13:52.592683 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a21e2296-10cb-4c70-ac3e-2173d35faac4-metrics-tls\") pod \"network-operator-7c649bf6d4-znt8q\" (UID: \"a21e2296-10cb-4c70-ac3e-2173d35faac4\") " pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" Mar 08 22:13:52.592763 master-0 kubenswrapper[29458]: I0308 22:13:52.592685 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcqnj\" (UniqueName: \"kubernetes.io/projected/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-kube-api-access-pcqnj\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.592763 master-0 kubenswrapper[29458]: I0308 22:13:52.592730 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7e4fb17aa6f4ce82697c1badb6e3e623-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7e4fb17aa6f4ce82697c1badb6e3e623\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:13:52.592966 master-0 kubenswrapper[29458]: I0308 22:13:52.592764 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96gl4\" (UniqueName: \"kubernetes.io/projected/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-kube-api-access-96gl4\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 22:13:52.592966 master-0 kubenswrapper[29458]: I0308 22:13:52.592802 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/081acedd-4c88-461f-80f3-e80fdbadb725-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-ngrjm\" (UID: \"081acedd-4c88-461f-80f3-e80fdbadb725\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" Mar 08 22:13:52.592966 master-0 kubenswrapper[29458]: I0308 22:13:52.592836 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/96a67acb-9cc6-4793-b99a-01479b239d76-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 22:13:52.592966 master-0 kubenswrapper[29458]: I0308 22:13:52.592884 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da51940a-a38f-4baf-9c14-b2f1f46b5aed-client-ca\") pod \"route-controller-manager-86888d445f-7f74k\" (UID: \"da51940a-a38f-4baf-9c14-b2f1f46b5aed\") " pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" Mar 08 22:13:52.592966 master-0 kubenswrapper[29458]: I0308 22:13:52.592953 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 22:13:52.593147 master-0 kubenswrapper[29458]: I0308 22:13:52.593008 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9c64\" (UniqueName: \"kubernetes.io/projected/dfe625a1-5ba4-491f-9ab3-5d91154961a0-kube-api-access-j9c64\") pod \"network-node-identity-trhtl\" (UID: \"dfe625a1-5ba4-491f-9ab3-5d91154961a0\") " pod="openshift-network-node-identity/network-node-identity-trhtl" Mar 08 22:13:52.593147 master-0 kubenswrapper[29458]: I0308 22:13:52.593043 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 08 22:13:52.593147 master-0 kubenswrapper[29458]: I0308 22:13:52.593112 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfpt7\" (UniqueName: \"kubernetes.io/projected/0d0feb73-2ef6-4083-81ce-82a1394ce9c4-kube-api-access-jfpt7\") pod \"migrator-57ccdf9b5-bf6ws\" (UID: \"0d0feb73-2ef6-4083-81ce-82a1394ce9c4\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-bf6ws" Mar 08 22:13:52.593147 master-0 kubenswrapper[29458]: I0308 22:13:52.593142 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-kubelet\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.593294 master-0 kubenswrapper[29458]: I0308 22:13:52.593205 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/d9fe466f-5a23-4f69-8a96-44bd5d6194f5-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-dvgxg\" (UID: \"d9fe466f-5a23-4f69-8a96-44bd5d6194f5\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" Mar 08 22:13:52.593294 master-0 kubenswrapper[29458]: I0308 22:13:52.593237 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 22:13:52.593374 master-0 kubenswrapper[29458]: I0308 22:13:52.593255 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 22:13:52.593374 master-0 kubenswrapper[29458]: I0308 22:13:52.593294 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/a21e2296-10cb-4c70-ac3e-2173d35faac4-host-etc-kube\") pod \"network-operator-7c649bf6d4-znt8q\" (UID: \"a21e2296-10cb-4c70-ac3e-2173d35faac4\") " pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" Mar 08 22:13:52.593836 master-0 kubenswrapper[29458]: I0308 22:13:52.593355 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/66e50eed-e3ac-431f-931b-7c4c848c491b-snapshots\") pod \"insights-operator-8f89dfddd-fn4ck\" (UID: \"66e50eed-e3ac-431f-931b-7c4c848c491b\") " pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" Mar 08 22:13:52.593836 master-0 kubenswrapper[29458]: I0308 22:13:52.593474 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-cnibin\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 22:13:52.593836 master-0 kubenswrapper[29458]: I0308 22:13:52.593503 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/66e50eed-e3ac-431f-931b-7c4c848c491b-snapshots\") pod \"insights-operator-8f89dfddd-fn4ck\" (UID: \"66e50eed-e3ac-431f-931b-7c4c848c491b\") " pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" Mar 08 22:13:52.593836 master-0 kubenswrapper[29458]: I0308 22:13:52.593508 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/088eecd9-a153-4fe0-af5a-78f5bdc0eb6b-utilities\") pod \"redhat-operators-8w7wm\" (UID: \"088eecd9-a153-4fe0-af5a-78f5bdc0eb6b\") " pod="openshift-marketplace/redhat-operators-8w7wm" Mar 08 22:13:52.593836 master-0 kubenswrapper[29458]: I0308 22:13:52.593543 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/971ffa86-4d52-4dc3-ba28-03d116ec3494-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-zk8sw\" (UID: \"971ffa86-4d52-4dc3-ba28-03d116ec3494\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw" Mar 08 22:13:52.593836 master-0 kubenswrapper[29458]: I0308 22:13:52.593571 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a5afb146-31d7-4da9-8738-b6c15528233a-etcd-client\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 22:13:52.593836 master-0 kubenswrapper[29458]: I0308 22:13:52.593585 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/081acedd-4c88-461f-80f3-e80fdbadb725-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-ngrjm\" (UID: \"081acedd-4c88-461f-80f3-e80fdbadb725\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" Mar 08 22:13:52.593836 master-0 kubenswrapper[29458]: I0308 22:13:52.593596 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a5afb146-31d7-4da9-8738-b6c15528233a-encryption-config\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 22:13:52.593836 master-0 kubenswrapper[29458]: I0308 22:13:52.593633 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 22:13:52.593836 master-0 kubenswrapper[29458]: I0308 22:13:52.593660 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-config\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 22:13:52.593836 master-0 kubenswrapper[29458]: I0308 22:13:52.593680 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-systemd\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 22:13:52.593836 master-0 kubenswrapper[29458]: I0308 22:13:52.593585 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/088eecd9-a153-4fe0-af5a-78f5bdc0eb6b-utilities\") pod \"redhat-operators-8w7wm\" (UID: \"088eecd9-a153-4fe0-af5a-78f5bdc0eb6b\") " pod="openshift-marketplace/redhat-operators-8w7wm" Mar 08 22:13:52.593836 master-0 kubenswrapper[29458]: I0308 22:13:52.593702 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjlqz\" (UniqueName: \"kubernetes.io/projected/6eb502a1-db10-46ba-b698-461919464fb9-kube-api-access-sjlqz\") pod \"control-plane-machine-set-operator-6686554ddc-c246n\" (UID: \"6eb502a1-db10-46ba-b698-461919464fb9\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-c246n" Mar 08 
22:13:52.593836 master-0 kubenswrapper[29458]: I0308 22:13:52.593745 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 08 22:13:52.593836 master-0 kubenswrapper[29458]: I0308 22:13:52.593775 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znqrj\" (UniqueName: \"kubernetes.io/projected/d9e9c931-9595-42f1-bbc2-c412286f6cd1-kube-api-access-znqrj\") pod \"cluster-baremetal-operator-5cdb4c5598-xwmmm\" (UID: \"d9e9c931-9595-42f1-bbc2-c412286f6cd1\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm"
Mar 08 22:13:52.593836 master-0 kubenswrapper[29458]: I0308 22:13:52.593837 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81f5ed55-225c-41e2-bc9d-b41063a604c9-metrics-certs\") pod \"router-default-79f8cd6fdd-4fsdl\" (UID: \"81f5ed55-225c-41e2-bc9d-b41063a604c9\") " pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.593870 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8e00c74-fb72-4e3d-a22c-c38a4772a813-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-nqz5k\" (UID: \"a8e00c74-fb72-4e3d-a22c-c38a4772a813\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.593873 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/971ffa86-4d52-4dc3-ba28-03d116ec3494-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-zk8sw\" (UID: \"971ffa86-4d52-4dc3-ba28-03d116ec3494\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.593925 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8e00c74-fb72-4e3d-a22c-c38a4772a813-config\") pod \"openshift-apiserver-operator-799b6db4d7-nqz5k\" (UID: \"a8e00c74-fb72-4e3d-a22c-c38a4772a813\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594028 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs\") pod \"network-metrics-daemon-lqdbv\" (UID: \"44e67e41-045e-42ef-8f60-6ef15606d6a2\") " pod="openshift-multus/network-metrics-daemon-lqdbv"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594060 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-qv4bv\" (UID: \"2a91f36f-900e-4b99-9be1-dfc61d8e31d9\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594095 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4382d186-34e4-40af-9b92-bb17ddcaa23f-etcd-client\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594110 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a913c639-ebfc-42a3-85cd-8a460027d3ec-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594119 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-run-netns\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594148 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxg7t\" (UniqueName: \"kubernetes.io/projected/385e69e4-d443-44bb-8ee4-578a1c902c62-kube-api-access-vxg7t\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594167 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-qv4bv\" (UID: \"2a91f36f-900e-4b99-9be1-dfc61d8e31d9\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594188 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8e00c74-fb72-4e3d-a22c-c38a4772a813-config\") pod \"openshift-apiserver-operator-799b6db4d7-nqz5k\" (UID: \"a8e00c74-fb72-4e3d-a22c-c38a4772a813\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594221 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-qv4bv\" (UID: \"2a91f36f-900e-4b99-9be1-dfc61d8e31d9\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594226 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-serving-cert\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594272 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594310 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-mt484\" (UID: \"4ef806a4-5486-43a9-8bfa-b1670c888dc1\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594229 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8e00c74-fb72-4e3d-a22c-c38a4772a813-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-nqz5k\" (UID: \"a8e00c74-fb72-4e3d-a22c-c38a4772a813\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594331 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44e67e41-045e-42ef-8f60-6ef15606d6a2-metrics-certs\") pod \"network-metrics-daemon-lqdbv\" (UID: \"44e67e41-045e-42ef-8f60-6ef15606d6a2\") " pod="openshift-multus/network-metrics-daemon-lqdbv"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594346 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzlpq\" (UniqueName: \"kubernetes.io/projected/4ef806a4-5486-43a9-8bfa-b1670c888dc1-kube-api-access-qzlpq\") pod \"cluster-monitoring-operator-674cbfbd9d-mt484\" (UID: \"4ef806a4-5486-43a9-8bfa-b1670c888dc1\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594387 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-mfqlz\" (UID: \"2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594417 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-mfqlz\" (UID: \"2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594426 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4382d186-34e4-40af-9b92-bb17ddcaa23f-etcd-client\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594494 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e635b0da-956b-4636-bc9b-61f231241908-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-kx9pl\" (UID: \"e635b0da-956b-4636-bc9b-61f231241908\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kx9pl"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594516 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4ef806a4-5486-43a9-8bfa-b1670c888dc1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-mt484\" (UID: \"4ef806a4-5486-43a9-8bfa-b1670c888dc1\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594498 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2851c096-f5cb-4a46-a5a0-ac0b1341033b-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594516 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-serving-cert\") pod \"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594567 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/96a67acb-9cc6-4793-b99a-01479b239d76-cni-binary-copy\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594596 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-socket-dir-parent\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594655 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-modprobe-d\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594685 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-784c7\" (UniqueName: \"kubernetes.io/projected/d0641333-feda-44c5-baf5-ceee4ce3fd8f-kube-api-access-784c7\") pod \"openshift-config-operator-64488f9d78-krpfs\" (UID: \"d0641333-feda-44c5-baf5-ceee4ce3fd8f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594711 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5afb146-31d7-4da9-8738-b6c15528233a-serving-cert\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594739 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5afb146-31d7-4da9-8738-b6c15528233a-trusted-ca-bundle\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594768 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-etc-kubernetes\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594787 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/96a67acb-9cc6-4793-b99a-01479b239d76-cni-binary-copy\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594798 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0cb21214-292a-48ee-85e2-6b1e62f40cb4-metrics-tls\") pod \"dns-default-65ts8\" (UID: \"0cb21214-292a-48ee-85e2-6b1e62f40cb4\") " pod="openshift-dns/dns-default-65ts8"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594834 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6fbc12f-3c27-4a7a-933f-43a55c960335-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-2mspg\" (UID: \"f6fbc12f-3c27-4a7a-933f-43a55c960335\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594882 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert\") pod \"olm-operator-d64cfc9db-xqh7x\" (UID: \"3c50dd1f-fcbc-412c-a1cc-0738ea4464e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594907 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/088eecd9-a153-4fe0-af5a-78f5bdc0eb6b-catalog-content\") pod \"redhat-operators-8w7wm\" (UID: \"088eecd9-a153-4fe0-af5a-78f5bdc0eb6b\") " pod="openshift-marketplace/redhat-operators-8w7wm"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594926 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-var-lib-kubelet\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.594961 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.595028 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drcp8\" (UniqueName: \"kubernetes.io/projected/a913c639-ebfc-42a3-85cd-8a460027d3ec-kube-api-access-drcp8\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.595039 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/088eecd9-a153-4fe0-af5a-78f5bdc0eb6b-catalog-content\") pod \"redhat-operators-8w7wm\" (UID: \"088eecd9-a153-4fe0-af5a-78f5bdc0eb6b\") " pod="openshift-marketplace/redhat-operators-8w7wm"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.595054 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/81f5ed55-225c-41e2-bc9d-b41063a604c9-default-certificate\") pod \"router-default-79f8cd6fdd-4fsdl\" (UID: \"81f5ed55-225c-41e2-bc9d-b41063a604c9\") " pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.595097 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6fbc12f-3c27-4a7a-933f-43a55c960335-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-2mspg\" (UID: \"f6fbc12f-3c27-4a7a-933f-43a55c960335\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.595111 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2395900a-ff6b-46ff-92c6-a8a1b5675b67-serving-cert\") pod \"controller-manager-f7df5f5b-txsrq\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") " pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.595135 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b849f992-1020-4633-98be-75705b962fa9-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-8pqc2\" (UID: \"b849f992-1020-4633-98be-75705b962fa9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.595155 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-run-ovn-kubernetes\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.595198 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.595218 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.595266 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jwf9\" (UniqueName: \"kubernetes.io/projected/f3fbcd83-a3e1-4de1-aceb-2692d348e495-kube-api-access-5jwf9\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.595290 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-tuning-conf-dir\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.595315 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-qv4bv\" (UID: \"2a91f36f-900e-4b99-9be1-dfc61d8e31d9\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.595317 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-srv-cert\") pod \"olm-operator-d64cfc9db-xqh7x\" (UID: \"3c50dd1f-fcbc-412c-a1cc-0738ea4464e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.595399 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b849f992-1020-4633-98be-75705b962fa9-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-8pqc2\" (UID: \"b849f992-1020-4633-98be-75705b962fa9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.595438 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-bound-sa-token\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.595504 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/385e69e4-d443-44bb-8ee4-578a1c902c62-cni-binary-copy\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.595583 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 08 22:13:52.595576 master-0 kubenswrapper[29458]: I0308 22:13:52.595610 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b849f992-1020-4633-98be-75705b962fa9-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-8pqc2\" (UID: \"b849f992-1020-4633-98be-75705b962fa9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.595654 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxssr\" (UniqueName: \"kubernetes.io/projected/fd9abe2b-f829-4376-9abe-7da0a08770e7-kube-api-access-vxssr\") pod \"cluster-samples-operator-664cb58b85-mkvtk\" (UID: \"fd9abe2b-f829-4376-9abe-7da0a08770e7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mkvtk"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.595768 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/385e69e4-d443-44bb-8ee4-578a1c902c62-cni-binary-copy\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.595781 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmdmr\" (UniqueName: \"kubernetes.io/projected/b358dcb7-d01f-4206-b636-b55a599a73bd-kube-api-access-bmdmr\") pod \"iptables-alerter-pwn9k\" (UID: \"b358dcb7-d01f-4206-b636-b55a599a73bd\") " pod="openshift-network-operator/iptables-alerter-pwn9k"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.595827 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66e50eed-e3ac-431f-931b-7c4c848c491b-service-ca-bundle\") pod \"insights-operator-8f89dfddd-fn4ck\" (UID: \"66e50eed-e3ac-431f-931b-7c4c848c491b\") " pod="openshift-insights/insights-operator-8f89dfddd-fn4ck"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.595882 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d851f97-b21e-432e-a4c3-dc0a8ff00e84-serving-cert\") pod \"service-ca-operator-69b6fc6b88-pkcp5\" (UID: \"0d851f97-b21e-432e-a4c3-dc0a8ff00e84\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.595920 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9-serving-cert\") pod \"cluster-version-operator-8c9c967c7-ln9l2\" (UID: \"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.595948 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-ln9l2\" (UID: \"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.595977 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/971ffa86-4d52-4dc3-ba28-03d116ec3494-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-zk8sw\" (UID: \"971ffa86-4d52-4dc3-ba28-03d116ec3494\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.596005 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-run-ovn\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.596039 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5xq4\" (UniqueName: \"kubernetes.io/projected/f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e-kube-api-access-l5xq4\") pod \"network-check-target-djlff\" (UID: \"f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e\") " pod="openshift-network-diagnostics/network-check-target-djlff"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.596068 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-trusted-ca-bundle\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.596129 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-etcd-client\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.596229 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/077643a2-ab2d-4f12-9abf-42a1c56d7aff-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-nk294\" (UID: \"077643a2-ab2d-4f12-9abf-42a1c56d7aff\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.596260 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6fbc12f-3c27-4a7a-933f-43a55c960335-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-2mspg\" (UID: \"f6fbc12f-3c27-4a7a-933f-43a55c960335\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.596323 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/971ffa86-4d52-4dc3-ba28-03d116ec3494-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-zk8sw\" (UID: \"971ffa86-4d52-4dc3-ba28-03d116ec3494\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.596456 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.596492 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-sysconfig\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.596492 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d851f97-b21e-432e-a4c3-dc0a8ff00e84-serving-cert\") pod \"service-ca-operator-69b6fc6b88-pkcp5\" (UID: \"0d851f97-b21e-432e-a4c3-dc0a8ff00e84\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.596519 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-ln9l2\" (UID: \"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.596547 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-cni-dir\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.596574 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-slash\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.596625 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6ht7\" (UniqueName: \"kubernetes.io/projected/37bf82cb-adea-46d3-a899-136eb1d1f292-kube-api-access-v6ht7\") pod \"csi-snapshot-controller-operator-5685fbc7d-nl9qg\" (UID: \"37bf82cb-adea-46d3-a899-136eb1d1f292\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-nl9qg"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.596664 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-x5zxr\" (UID: \"be431b74-1116-4b0f-8b25-bbb0408411b0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.596701 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftn6p\" (UniqueName: \"kubernetes.io/projected/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-kube-api-access-ftn6p\") pod \"catalogd-controller-manager-7f8b8b6f4c-qv4bv\" (UID: \"2a91f36f-900e-4b99-9be1-dfc61d8e31d9\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.596728 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.596757 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjndf\" (UniqueName: \"kubernetes.io/projected/10e2e81b-cd18-4e30-b8ad-4cf105daea4a-kube-api-access-sjndf\") pod \"network-check-source-7c67b67d47-qf2dp\" (UID: \"10e2e81b-cd18-4e30-b8ad-4cf105daea4a\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-qf2dp"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.596786 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a913c639-ebfc-42a3-85cd-8a460027d3ec-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.596815 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81f5ed55-225c-41e2-bc9d-b41063a604c9-service-ca-bundle\") pod \"router-default-79f8cd6fdd-4fsdl\" (UID: \"81f5ed55-225c-41e2-bc9d-b41063a604c9\") " pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.596924 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dfe625a1-5ba4-491f-9ab3-5d91154961a0-env-overrides\") pod \"network-node-identity-trhtl\" (UID: \"dfe625a1-5ba4-491f-9ab3-5d91154961a0\") " pod="openshift-network-node-identity/network-node-identity-trhtl"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.596978 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff6pm\" (UniqueName: \"kubernetes.io/projected/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-kube-api-access-ff6pm\") pod \"olm-operator-d64cfc9db-xqh7x\" (UID: \"3c50dd1f-fcbc-412c-a1cc-0738ea4464e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.597020 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/be431b74-1116-4b0f-8b25-bbb0408411b0-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-x5zxr\" (UID: \"be431b74-1116-4b0f-8b25-bbb0408411b0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.597136 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dfe625a1-5ba4-491f-9ab3-5d91154961a0-env-overrides\") pod \"network-node-identity-trhtl\" (UID: \"dfe625a1-5ba4-491f-9ab3-5d91154961a0\") " pod="openshift-network-node-identity/network-node-identity-trhtl"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.597201 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/96a67acb-9cc6-4793-b99a-01479b239d76-whereabouts-configmap\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.597212 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a913c639-ebfc-42a3-85cd-8a460027d3ec-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr"
Mar 08 22:13:52.598023 master-0 kubenswrapper[29458]: I0308 22:13:52.597537 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/96a67acb-9cc6-4793-b99a-01479b239d76-whereabouts-configmap\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 22:13:52.601174 master-0 kubenswrapper[29458]: I0308 22:13:52.601132 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 08 22:13:52.603868 master-0 kubenswrapper[29458]: I0308 22:13:52.603835 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e8ef68b9-6f8d-4697-b269-91ee4e310752-signing-key\") pod \"service-ca-84bfdbbb7f-b8zkz\" (UID: \"e8ef68b9-6f8d-4697-b269-91ee4e310752\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-b8zkz"
Mar 08 22:13:52.610130 master-0 kubenswrapper[29458]: E0308 22:13:52.610060 29458 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0"
Mar 08 22:13:52.620178 master-0 kubenswrapper[29458]: I0308 22:13:52.620124 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 08 22:13:52.622745 master-0 kubenswrapper[29458]: I0308 22:13:52.622699 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-etcd-serving-ca\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6"
Mar 08 22:13:52.635650 master-0 kubenswrapper[29458]: I0308 22:13:52.635529 29458 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 08 22:13:52.641295 master-0 kubenswrapper[29458]: I0308 22:13:52.641228 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 08 22:13:52.665012 master-0 kubenswrapper[29458]: I0308 22:13:52.664949 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Mar 08 22:13:52.669543 master-0 kubenswrapper[29458]: I0308 22:13:52.669475 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e8ef68b9-6f8d-4697-b269-91ee4e310752-signing-cabundle\") pod \"service-ca-84bfdbbb7f-b8zkz\" (UID: \"e8ef68b9-6f8d-4697-b269-91ee4e310752\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-b8zkz"
Mar 08 22:13:52.680243 master-0 kubenswrapper[29458]: I0308 22:13:52.680197 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 08 22:13:52.680603 master-0 kubenswrapper[29458]: I0308 22:13:52.680569 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-image-import-ca\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6"
Mar 08 22:13:52.698694 master-0 kubenswrapper[29458]: I0308 22:13:52.698610 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/d063b330-4180-43de-a248-c573183d96f1-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh\" (UID: \"d063b330-4180-43de-a248-c573183d96f1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh"
Mar 08 22:13:52.698694 master-0 kubenswrapper[29458]: I0308 22:13:52.698677 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 22:13:52.698694 master-0 kubenswrapper[29458]: I0308 22:13:52.698697 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 22:13:52.699127 master-0 kubenswrapper[29458]: I0308 22:13:52.698731 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-os-release\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 22:13:52.699127 master-0 kubenswrapper[29458]: I0308 22:13:52.698764 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4eec590b-c536-4b16-a664-81bc3c74eef5-utilities\") pod \"redhat-marketplace-mg95b\" (UID: \"4eec590b-c536-4b16-a664-81bc3c74eef5\") " pod="openshift-marketplace/redhat-marketplace-mg95b"
Mar 08 22:13:52.699127 master-0 kubenswrapper[29458]: I0308 22:13:52.698779 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1207b6b-0517-46eb-9953-737f2bf1040d-utilities\") pod \"certified-operators-8ctpt\" (UID: \"b1207b6b-0517-46eb-9953-737f2bf1040d\") " pod="openshift-marketplace/certified-operators-8ctpt"
Mar 08 22:13:52.699127 master-0 kubenswrapper[29458]: I0308 22:13:52.698797 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c3af41e9-c604-48da-bec5-df81c2ef3374-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc"
Mar 08 22:13:52.699127 master-0 kubenswrapper[29458]: I0308 22:13:52.698818 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-system-cni-dir\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 22:13:52.699127 master-0 kubenswrapper[29458]: I0308 22:13:52.698839 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/077643a2-ab2d-4f12-9abf-42a1c56d7aff-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-nk294\" (UID: \"077643a2-ab2d-4f12-9abf-42a1c56d7aff\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294"
Mar 08 22:13:52.699127 master-0 kubenswrapper[29458]: I0308 22:13:52.698859 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c52wj\" (UniqueName: \"kubernetes.io/projected/b6bc6f78-2c5c-4add-925f-f6568a49c2cc-kube-api-access-c52wj\") pod \"machine-config-controller-ff46b7bdf-zn77m\" (UID: \"b6bc6f78-2c5c-4add-925f-f6568a49c2cc\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m"
Mar 08 22:13:52.699127 master-0 kubenswrapper[29458]: I0308 22:13:52.698877 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2lsl\" (UniqueName: \"kubernetes.io/projected/b1207b6b-0517-46eb-9953-737f2bf1040d-kube-api-access-d2lsl\") pod \"certified-operators-8ctpt\" (UID: \"b1207b6b-0517-46eb-9953-737f2bf1040d\") " pod="openshift-marketplace/certified-operators-8ctpt"
Mar 08 22:13:52.699127 master-0 kubenswrapper[29458]: I0308 22:13:52.698903 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3e38e989-41b8-4c80-99fb-8d414dda5da1-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-m7phf\" (UID: \"3e38e989-41b8-4c80-99fb-8d414dda5da1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf"
Mar 08 22:13:52.699127 master-0 kubenswrapper[29458]: I0308 22:13:52.698942 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b6bc6f78-2c5c-4add-925f-f6568a49c2cc-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-zn77m\" (UID: \"b6bc6f78-2c5c-4add-925f-f6568a49c2cc\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m"
Mar 08 22:13:52.699127 master-0 kubenswrapper[29458]: I0308 22:13:52.698981 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 08 22:13:52.699127 master-0 kubenswrapper[29458]: I0308 22:13:52.699003 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-node-log\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r"
Mar 08 22:13:52.699127 master-0 kubenswrapper[29458]: I0308 22:13:52.699121 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1207b6b-0517-46eb-9953-737f2bf1040d-utilities\") pod \"certified-operators-8ctpt\" (UID: \"b1207b6b-0517-46eb-9953-737f2bf1040d\") " pod="openshift-marketplace/certified-operators-8ctpt"
Mar 08 22:13:52.699583 master-0 kubenswrapper[29458]: I0308 22:13:52.699320 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 22:13:52.699583 master-0 kubenswrapper[29458]: I0308 22:13:52.699326 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-system-cni-dir\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 22:13:52.699583 master-0 kubenswrapper[29458]: I0308 22:13:52.699420 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 08 22:13:52.699583 master-0 kubenswrapper[29458]: I0308 22:13:52.699431 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-node-log\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r"
Mar 08 22:13:52.699583 master-0 kubenswrapper[29458]: I0308 22:13:52.699465 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 22:13:52.699583 master-0 kubenswrapper[29458]: I0308 22:13:52.699553 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4eec590b-c536-4b16-a664-81bc3c74eef5-utilities\") pod \"redhat-marketplace-mg95b\" (UID: \"4eec590b-c536-4b16-a664-81bc3c74eef5\") " pod="openshift-marketplace/redhat-marketplace-mg95b"
Mar 08 22:13:52.699583 master-0 kubenswrapper[29458]: I0308 22:13:52.699515 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-os-release\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb"
Mar 08 22:13:52.699909 master-0 kubenswrapper[29458]: I0308 22:13:52.699619 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/077643a2-ab2d-4f12-9abf-42a1c56d7aff-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-nk294\" (UID: \"077643a2-ab2d-4f12-9abf-42a1c56d7aff\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294"
Mar 08 22:13:52.699909 master-0 kubenswrapper[29458]: I0308 22:13:52.699685 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/00db426a-15d4-4737-a85e-b4cf6362c759-webhook-certs\") pod \"multus-admission-controller-7769569c45-9lhn8\" (UID: \"00db426a-15d4-4737-a85e-b4cf6362c759\") " pod="openshift-multus/multus-admission-controller-7769569c45-9lhn8"
Mar 08 22:13:52.699909 master-0 kubenswrapper[29458]: I0308 22:13:52.699732 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-conf-dir\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx"
Mar 08 22:13:52.699909 master-0 kubenswrapper[29458]: I0308 22:13:52.699775 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-conf-dir\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx"
Mar 08 22:13:52.700062 master-0 kubenswrapper[29458]: I0308 22:13:52.699912 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-sysctl-d\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5"
Mar 08 22:13:52.700062 master-0 kubenswrapper[29458]: I0308 22:13:52.699945 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tfdv\" (UniqueName: \"kubernetes.io/projected/1ef14467-bb62-462d-9dec-dee43e4cc9bd-kube-api-access-6tfdv\") pod \"machine-api-operator-84bf6db4f9-64gfj\" (UID: \"1ef14467-bb62-462d-9dec-dee43e4cc9bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj"
Mar 08 22:13:52.700062 master-0 kubenswrapper[29458]: I0308 22:13:52.699968 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-sysctl-conf\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5"
Mar 08 22:13:52.700062 master-0 kubenswrapper[29458]: I0308 22:13:52.700004 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-var-lib-cni-bin\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx"
Mar 08 22:13:52.700244 master-0 kubenswrapper[29458]: I0308 22:13:52.700123 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-kubernetes\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5"
Mar 08 22:13:52.700244 master-0 kubenswrapper[29458]: I0308 22:13:52.700158 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ef14467-bb62-462d-9dec-dee43e4cc9bd-config\") pod \"machine-api-operator-84bf6db4f9-64gfj\" (UID: \"1ef14467-bb62-462d-9dec-dee43e4cc9bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj"
Mar 08 22:13:52.700244 master-0 kubenswrapper[29458]: I0308 22:13:52.700186 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fp4g\" (UniqueName: \"kubernetes.io/projected/0269ed52-a753-49aa-9c38-c7aee23cebbd-kube-api-access-8fp4g\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g"
Mar 08 22:13:52.700244 master-0 kubenswrapper[29458]: I0308 22:13:52.700192 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-sysctl-d\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5"
Mar 08 22:13:52.700244 master-0 kubenswrapper[29458]: I0308 22:13:52.700208 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 22:13:52.700440 master-0 kubenswrapper[29458]: I0308 22:13:52.700249 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 22:13:52.700440 master-0 kubenswrapper[29458]: I0308 22:13:52.700377 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-kubernetes\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5"
Mar 08 22:13:52.700519 master-0 kubenswrapper[29458]: I0308 22:13:52.700444 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l82d\" (UniqueName: \"kubernetes.io/projected/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-kube-api-access-9l82d\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x"
Mar 08 22:13:52.700519 master-0 kubenswrapper[29458]: I0308 22:13:52.700478 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 08 22:13:52.700606 master-0 kubenswrapper[29458]: I0308 22:13:52.700525 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-sysctl-conf\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5"
Mar 08 22:13:52.700606 master-0 kubenswrapper[29458]: I0308 22:13:52.700546 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-var-lib-cni-multus\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx"
Mar 08 22:13:52.700606 master-0 kubenswrapper[29458]: I0308 22:13:52.700594 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 08 22:13:52.700714 master-0 kubenswrapper[29458]: I0308 22:13:52.700633 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-var-lib-cni-multus\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx"
Mar 08 22:13:52.700714 master-0 kubenswrapper[29458]: I0308 22:13:52.700671 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-var-lib-cni-bin\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx"
Mar 08 22:13:52.700794 master-0 kubenswrapper[29458]: I0308 22:13:52.700710 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1ef14467-bb62-462d-9dec-dee43e4cc9bd-images\") pod \"machine-api-operator-84bf6db4f9-64gfj\" (UID: \"1ef14467-bb62-462d-9dec-dee43e4cc9bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj"
Mar 08 22:13:52.700794 master-0 kubenswrapper[29458]: I0308 22:13:52.700749 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shdtk\" (UniqueName: \"kubernetes.io/projected/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3-kube-api-access-shdtk\") pod \"machine-config-daemon-q669r\" (UID: \"7868a4fb-af89-4bdc-b41b-31f4ee59b5f3\") " pod="openshift-machine-config-operator/machine-config-daemon-q669r"
Mar 08 22:13:52.700794 master-0 kubenswrapper[29458]: I0308 22:13:52.700780 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-qv4bv\" (UID: \"2a91f36f-900e-4b99-9be1-dfc61d8e31d9\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv"
Mar 08 22:13:52.701018 master-0 kubenswrapper[29458]: I0308 22:13:52.700979 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-qv4bv\" (UID: \"2a91f36f-900e-4b99-9be1-dfc61d8e31d9\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv"
Mar 08 22:13:52.701018 master-0 kubenswrapper[29458]: I0308 22:13:52.700981 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 08 22:13:52.701137 master-0 kubenswrapper[29458]: I0308 22:13:52.701018 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 08 22:13:52.701137 master-0 kubenswrapper[29458]: I0308 22:13:52.701036 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 22:13:52.701137 master-0 kubenswrapper[29458]: I0308 22:13:52.701106 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-stxvg\" (UID: \"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg"
Mar 08 22:13:52.701137 master-0 kubenswrapper[29458]: I0308 22:13:52.701136 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxxvr\" (UniqueName: \"kubernetes.io/projected/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-kube-api-access-gxxvr\") pod \"machine-approver-754bdc9f9d-stxvg\" (UID: \"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg"
Mar 08 22:13:52.701289 master-0 kubenswrapper[29458]: I0308 22:13:52.701145 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 22:13:52.701289 master-0 kubenswrapper[29458]: I0308 22:13:52.701166 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1d188983-1f19-4c8e-b604-034bd6308139-var-lock\") pod \"installer-2-master-0\" (UID: \"1d188983-1f19-4c8e-b604-034bd6308139\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 08 22:13:52.701289 master-0 kubenswrapper[29458]: I0308 22:13:52.701203 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 08 22:13:52.701289 master-0 kubenswrapper[29458]: I0308 22:13:52.701237 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 08 22:13:52.701289 master-0 kubenswrapper[29458]: I0308 22:13:52.701278 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-run-openvswitch\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r"
Mar 08 22:13:52.701472 master-0 kubenswrapper[29458]: I0308 22:13:52.701330 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-run-openvswitch\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r"
Mar 08 22:13:52.701472 master-0 kubenswrapper[29458]: I0308 22:13:52.701425 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-hostroot\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx"
Mar 08 22:13:52.701472 master-0 kubenswrapper[29458]: I0308 22:13:52.701455 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-log-socket\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r"
Mar 08 22:13:52.701575 master-0 kubenswrapper[29458]: I0308 22:13:52.701481 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-hostroot\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx"
Mar 08 22:13:52.701575 master-0 kubenswrapper[29458]: I0308 22:13:52.701495 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-metrics-client-ca\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg"
Mar 08 22:13:52.701575 master-0 kubenswrapper[29458]: I0308 22:13:52.701527 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-log-socket\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r"
Mar 08 22:13:52.701575 master-0 kubenswrapper[29458]: I0308 22:13:52.701556 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-ln9l2\" (UID: \"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2"
Mar 08 22:13:52.701708 master-0 kubenswrapper[29458]: I0308 22:13:52.701585 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-etc-openvswitch\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r"
Mar 08 22:13:52.701708 master-0 kubenswrapper[29458]: I0308 22:13:52.701613 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1d188983-1f19-4c8e-b604-034bd6308139-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"1d188983-1f19-4c8e-b604-034bd6308139\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 08 22:13:52.701708 master-0 kubenswrapper[29458]: I0308 22:13:52.701650 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3e38e989-41b8-4c80-99fb-8d414dda5da1-images\") pod \"machine-config-operator-fdb5c78b5-m7phf\" (UID: \"3e38e989-41b8-4c80-99fb-8d414dda5da1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf"
Mar 08 22:13:52.701708 master-0 kubenswrapper[29458]: I0308 22:13:52.701676 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-etc-openvswitch\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r"
Mar 08 22:13:52.701708 master-0 kubenswrapper[29458]: I0308 22:13:52.701692 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-cni-netd\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r"
Mar 08 22:13:52.702879 master-0 kubenswrapper[29458]: I0308 22:13:52.701652 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-ln9l2\" (UID: \"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2"
Mar 08 22:13:52.702879 master-0 kubenswrapper[29458]: I0308 22:13:52.701731 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-cni-netd\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r"
Mar 08 22:13:52.702879 master-0 kubenswrapper[29458]: I0308 22:13:52.701779 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for
volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-cnibin\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 22:13:52.702879 master-0 kubenswrapper[29458]: I0308 22:13:52.701826 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z4s4\" (UniqueName: \"kubernetes.io/projected/c377685c-2024-4ef7-932d-5858eeb0d9bd-kube-api-access-4z4s4\") pod \"openshift-state-metrics-74cc79fd76-8rbn8\" (UID: \"c377685c-2024-4ef7-932d-5858eeb0d9bd\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8" Mar 08 22:13:52.702879 master-0 kubenswrapper[29458]: I0308 22:13:52.701851 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-system-cni-dir\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 22:13:52.702879 master-0 kubenswrapper[29458]: I0308 22:13:52.701890 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-run-netns\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.702879 master-0 kubenswrapper[29458]: I0308 22:13:52.701912 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-run\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 22:13:52.702879 master-0 kubenswrapper[29458]: I0308 22:13:52.701934 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7e4fb17aa6f4ce82697c1badb6e3e623-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7e4fb17aa6f4ce82697c1badb6e3e623\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:13:52.702879 master-0 kubenswrapper[29458]: I0308 22:13:52.701972 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c377685c-2024-4ef7-932d-5858eeb0d9bd-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-8rbn8\" (UID: \"c377685c-2024-4ef7-932d-5858eeb0d9bd\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8" Mar 08 22:13:52.702879 master-0 kubenswrapper[29458]: I0308 22:13:52.702016 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k67bc\" (UniqueName: \"kubernetes.io/projected/4eec590b-c536-4b16-a664-81bc3c74eef5-kube-api-access-k67bc\") pod \"redhat-marketplace-mg95b\" (UID: \"4eec590b-c536-4b16-a664-81bc3c74eef5\") " pod="openshift-marketplace/redhat-marketplace-mg95b" Mar 08 22:13:52.702879 master-0 kubenswrapper[29458]: I0308 22:13:52.702054 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-audit-dir\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " 
pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 22:13:52.702879 master-0 kubenswrapper[29458]: I0308 22:13:52.702106 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-lib-modules\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 22:13:52.702879 master-0 kubenswrapper[29458]: I0308 22:13:52.702134 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhp8w\" (UniqueName: \"kubernetes.io/projected/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e-kube-api-access-lhp8w\") pod \"packageserver-f988cd549-68kmh\" (UID: \"4e2eb05c-eaa5-4d9b-abad-c0ef6835087e\") " pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh" Mar 08 22:13:52.702879 master-0 kubenswrapper[29458]: I0308 22:13:52.702195 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp86m\" (UniqueName: \"kubernetes.io/projected/3e38e989-41b8-4c80-99fb-8d414dda5da1-kube-api-access-jp86m\") pod \"machine-config-operator-fdb5c78b5-m7phf\" (UID: \"3e38e989-41b8-4c80-99fb-8d414dda5da1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf" Mar 08 22:13:52.702879 master-0 kubenswrapper[29458]: I0308 22:13:52.702226 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0269ed52-a753-49aa-9c38-c7aee23cebbd-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:13:52.702879 master-0 kubenswrapper[29458]: I0308 22:13:52.702252 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 22:13:52.702879 master-0 kubenswrapper[29458]: I0308 22:13:52.702289 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v2k8\" (UniqueName: \"kubernetes.io/projected/d063b330-4180-43de-a248-c573183d96f1-kube-api-access-8v2k8\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh\" (UID: \"d063b330-4180-43de-a248-c573183d96f1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" Mar 08 22:13:52.702879 master-0 kubenswrapper[29458]: I0308 22:13:52.702318 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0269ed52-a753-49aa-9c38-c7aee23cebbd-root\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:13:52.702879 master-0 kubenswrapper[29458]: I0308 22:13:52.702353 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-var-lib-kubelet\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 22:13:52.702879 master-0 kubenswrapper[29458]: I0308 22:13:52.701878 29458 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-cnibin\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 22:13:52.702879 master-0 kubenswrapper[29458]: I0308 22:13:52.702383 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8a7e92d4-b7ed-408b-b7cf-00209a627bea-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-jd2m9\" (UID: \"8a7e92d4-b7ed-408b-b7cf-00209a627bea\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9" Mar 08 22:13:52.702879 master-0 kubenswrapper[29458]: I0308 22:13:52.702411 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-audit-dir\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 22:13:52.702879 master-0 kubenswrapper[29458]: I0308 22:13:52.702412 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a5afb146-31d7-4da9-8738-b6c15528233a-audit-dir\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 22:13:52.702879 master-0 kubenswrapper[29458]: I0308 22:13:52.702500 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7e4fb17aa6f4ce82697c1badb6e3e623-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7e4fb17aa6f4ce82697c1badb6e3e623\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:13:52.702879 master-0 kubenswrapper[29458]: I0308 22:13:52.702529 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq7xb\" (UniqueName: \"kubernetes.io/projected/4b5246dc-b715-4678-a3a9-878df57dd236-kube-api-access-hq7xb\") pod \"machine-config-server-svxwz\" (UID: \"4b5246dc-b715-4678-a3a9-878df57dd236\") " pod="openshift-machine-config-operator/machine-config-server-svxwz" Mar 08 22:13:52.702879 master-0 kubenswrapper[29458]: I0308 22:13:52.702582 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-lib-modules\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 22:13:52.702879 master-0 kubenswrapper[29458]: I0308 22:13:52.702592 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3-mcd-auth-proxy-config\") pod \"machine-config-daemon-q669r\" (UID: \"7868a4fb-af89-4bdc-b41b-31f4ee59b5f3\") " pod="openshift-machine-config-operator/machine-config-daemon-q669r" Mar 08 22:13:52.702879 master-0 kubenswrapper[29458]: I0308 22:13:52.702681 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access\") pod \"installer-2-master-0\" (UID: \"1d188983-1f19-4c8e-b604-034bd6308139\") " 
pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 22:13:52.702879 master-0 kubenswrapper[29458]: I0308 22:13:52.702723 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 08 22:13:52.702879 master-0 kubenswrapper[29458]: I0308 22:13:52.702898 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 22:13:52.704238 master-0 kubenswrapper[29458]: I0308 22:13:52.702444 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a5afb146-31d7-4da9-8738-b6c15528233a-audit-dir\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 22:13:52.704238 master-0 kubenswrapper[29458]: I0308 22:13:52.702935 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-run-netns\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.704238 master-0 kubenswrapper[29458]: I0308 22:13:52.702960 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 08 22:13:52.704238 master-0 kubenswrapper[29458]: I0308 22:13:52.703164 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-var-lib-kubelet\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 22:13:52.704238 master-0 kubenswrapper[29458]: I0308 22:13:52.703314 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-run\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 22:13:52.704238 master-0 kubenswrapper[29458]: I0308 22:13:52.703436 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7e4fb17aa6f4ce82697c1badb6e3e623-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7e4fb17aa6f4ce82697c1badb6e3e623\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:13:52.704238 master-0 kubenswrapper[29458]: I0308 22:13:52.703479 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-system-cni-dir\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 22:13:52.704238 
master-0 kubenswrapper[29458]: I0308 22:13:52.703502 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-kubelet\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.704238 master-0 kubenswrapper[29458]: I0308 22:13:52.703535 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-kubelet\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.704238 master-0 kubenswrapper[29458]: I0308 22:13:52.703575 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdz7m\" (UniqueName: \"kubernetes.io/projected/8a7e92d4-b7ed-408b-b7cf-00209a627bea-kube-api-access-qdz7m\") pod \"prometheus-operator-5ff8674d55-jd2m9\" (UID: \"8a7e92d4-b7ed-408b-b7cf-00209a627bea\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9" Mar 08 22:13:52.704238 master-0 kubenswrapper[29458]: I0308 22:13:52.703578 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7e4fb17aa6f4ce82697c1badb6e3e623-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7e4fb17aa6f4ce82697c1badb6e3e623\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:13:52.704238 master-0 kubenswrapper[29458]: I0308 22:13:52.703681 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e-tmpfs\") pod \"packageserver-f988cd549-68kmh\" (UID: \"4e2eb05c-eaa5-4d9b-abad-c0ef6835087e\") " pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh" Mar 08 22:13:52.704238 master-0 kubenswrapper[29458]: I0308 22:13:52.703723 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 22:13:52.704238 master-0 kubenswrapper[29458]: I0308 22:13:52.703748 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/a21e2296-10cb-4c70-ac3e-2173d35faac4-host-etc-kube\") pod \"network-operator-7c649bf6d4-znt8q\" (UID: \"a21e2296-10cb-4c70-ac3e-2173d35faac4\") " pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" Mar 08 22:13:52.704238 master-0 kubenswrapper[29458]: I0308 22:13:52.703779 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-cnibin\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 22:13:52.704238 master-0 kubenswrapper[29458]: I0308 22:13:52.703802 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " 
pod="openshift-etcd/etcd-master-0" Mar 08 22:13:52.704238 master-0 kubenswrapper[29458]: I0308 22:13:52.703802 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c377685c-2024-4ef7-932d-5858eeb0d9bd-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-8rbn8\" (UID: \"c377685c-2024-4ef7-932d-5858eeb0d9bd\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8" Mar 08 22:13:52.704238 master-0 kubenswrapper[29458]: I0308 22:13:52.703833 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e-tmpfs\") pod \"packageserver-f988cd549-68kmh\" (UID: \"4e2eb05c-eaa5-4d9b-abad-c0ef6835087e\") " pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh" Mar 08 22:13:52.704238 master-0 kubenswrapper[29458]: I0308 22:13:52.703854 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/a21e2296-10cb-4c70-ac3e-2173d35faac4-host-etc-kube\") pod \"network-operator-7c649bf6d4-znt8q\" (UID: \"a21e2296-10cb-4c70-ac3e-2173d35faac4\") " pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" Mar 08 22:13:52.704238 master-0 kubenswrapper[29458]: I0308 22:13:52.703910 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-cnibin\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 22:13:52.704238 master-0 kubenswrapper[29458]: I0308 22:13:52.704239 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-audit-log\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:13:52.704238 master-0 kubenswrapper[29458]: I0308 22:13:52.704275 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-federate-client-tls\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:13:52.705601 master-0 kubenswrapper[29458]: I0308 22:13:52.704304 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-serving-certs-ca-bundle\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:13:52.705601 master-0 kubenswrapper[29458]: I0308 22:13:52.704350 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3-rootfs\") pod \"machine-config-daemon-q669r\" (UID: \"7868a4fb-af89-4bdc-b41b-31f4ee59b5f3\") " pod="openshift-machine-config-operator/machine-config-daemon-q669r" Mar 08 22:13:52.705601 master-0 kubenswrapper[29458]: I0308 22:13:52.704377 29458 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0269ed52-a753-49aa-9c38-c7aee23cebbd-node-exporter-wtmp\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:13:52.705601 master-0 kubenswrapper[29458]: I0308 22:13:52.704404 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-audit-log\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:13:52.705601 master-0 kubenswrapper[29458]: I0308 22:13:52.704452 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1207b6b-0517-46eb-9953-737f2bf1040d-catalog-content\") pod \"certified-operators-8ctpt\" (UID: \"b1207b6b-0517-46eb-9953-737f2bf1040d\") " pod="openshift-marketplace/certified-operators-8ctpt" Mar 08 22:13:52.705601 master-0 kubenswrapper[29458]: I0308 22:13:52.704541 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-systemd\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 22:13:52.705601 master-0 kubenswrapper[29458]: I0308 22:13:52.704567 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1207b6b-0517-46eb-9953-737f2bf1040d-catalog-content\") pod \"certified-operators-8ctpt\" (UID: \"b1207b6b-0517-46eb-9953-737f2bf1040d\") " pod="openshift-marketplace/certified-operators-8ctpt" Mar 08 22:13:52.705601 master-0 kubenswrapper[29458]: I0308 22:13:52.704572 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 22:13:52.705601 master-0 kubenswrapper[29458]: I0308 22:13:52.704616 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 22:13:52.705601 master-0 kubenswrapper[29458]: I0308 22:13:52.704644 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0269ed52-a753-49aa-9c38-c7aee23cebbd-node-exporter-tls\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:13:52.705601 master-0 kubenswrapper[29458]: I0308 22:13:52.704645 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-systemd\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 22:13:52.705601 master-0 kubenswrapper[29458]: I0308 22:13:52.704832 29458 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-run-netns\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 22:13:52.705601 master-0 kubenswrapper[29458]: I0308 22:13:52.704890 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3-proxy-tls\") pod \"machine-config-daemon-q669r\" (UID: \"7868a4fb-af89-4bdc-b41b-31f4ee59b5f3\") " pod="openshift-machine-config-operator/machine-config-daemon-q669r" Mar 08 22:13:52.705601 master-0 kubenswrapper[29458]: I0308 22:13:52.704895 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-run-netns\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 22:13:52.705601 master-0 kubenswrapper[29458]: I0308 22:13:52.704943 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-qv4bv\" (UID: \"2a91f36f-900e-4b99-9be1-dfc61d8e31d9\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 22:13:52.705601 master-0 kubenswrapper[29458]: I0308 22:13:52.705023 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-qv4bv\" (UID: \"2a91f36f-900e-4b99-9be1-dfc61d8e31d9\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 22:13:52.705601 master-0 kubenswrapper[29458]: I0308 22:13:52.705042 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1ef14467-bb62-462d-9dec-dee43e4cc9bd-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-64gfj\" (UID: \"1ef14467-bb62-462d-9dec-dee43e4cc9bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj" Mar 08 22:13:52.705601 master-0 kubenswrapper[29458]: I0308 22:13:52.705149 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b6bc6f78-2c5c-4add-925f-f6568a49c2cc-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-zn77m\" (UID: \"b6bc6f78-2c5c-4add-925f-f6568a49c2cc\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m" Mar 08 22:13:52.705601 master-0 kubenswrapper[29458]: I0308 22:13:52.705177 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-socket-dir-parent\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 22:13:52.705601 master-0 kubenswrapper[29458]: I0308 22:13:52.705236 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-secret-metrics-server-tls\") pod \"metrics-server-f5876b8d7-2222x\" (UID: 
\"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:13:52.705601 master-0 kubenswrapper[29458]: I0308 22:13:52.705387 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-etc-kubernetes\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 22:13:52.705601 master-0 kubenswrapper[29458]: I0308 22:13:52.705423 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-etc-kubernetes\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 22:13:52.705601 master-0 kubenswrapper[29458]: I0308 22:13:52.705530 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-modprobe-d\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 22:13:52.705601 master-0 kubenswrapper[29458]: I0308 22:13:52.705398 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-socket-dir-parent\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 22:13:52.705601 master-0 kubenswrapper[29458]: I0308 22:13:52.705560 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/345ca27a-f572-4efa-b0ce-dfa8243becd6-var-lock\") pod \"installer-4-master-0\" (UID: \"345ca27a-f572-4efa-b0ce-dfa8243becd6\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 08 22:13:52.705601 master-0 kubenswrapper[29458]: I0308 22:13:52.705618 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-var-lib-kubelet\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 22:13:52.706508 master-0 kubenswrapper[29458]: I0308 22:13:52.705639 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 22:13:52.706508 master-0 kubenswrapper[29458]: I0308 22:13:52.705679 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-modprobe-d\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 22:13:52.706508 master-0 kubenswrapper[29458]: I0308 22:13:52.705679 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4eec590b-c536-4b16-a664-81bc3c74eef5-catalog-content\") pod \"redhat-marketplace-mg95b\" (UID: \"4eec590b-c536-4b16-a664-81bc3c74eef5\") " 
pod="openshift-marketplace/redhat-marketplace-mg95b" Mar 08 22:13:52.706508 master-0 kubenswrapper[29458]: I0308 22:13:52.705712 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-var-lib-kubelet\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 22:13:52.706508 master-0 kubenswrapper[29458]: I0308 22:13:52.705742 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4b5246dc-b715-4678-a3a9-878df57dd236-node-bootstrap-token\") pod \"machine-config-server-svxwz\" (UID: \"4b5246dc-b715-4678-a3a9-878df57dd236\") " pod="openshift-machine-config-operator/machine-config-server-svxwz" Mar 08 22:13:52.706508 master-0 kubenswrapper[29458]: I0308 22:13:52.705767 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 22:13:52.706508 master-0 kubenswrapper[29458]: I0308 22:13:52.705807 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-run-ovn-kubernetes\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.706508 master-0 kubenswrapper[29458]: I0308 22:13:52.705836 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:13:52.706508 master-0 kubenswrapper[29458]: I0308 22:13:52.705852 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4eec590b-c536-4b16-a664-81bc3c74eef5-catalog-content\") pod \"redhat-marketplace-mg95b\" (UID: \"4eec590b-c536-4b16-a664-81bc3c74eef5\") " pod="openshift-marketplace/redhat-marketplace-mg95b" Mar 08 22:13:52.706508 master-0 kubenswrapper[29458]: I0308 22:13:52.705862 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:13:52.706508 master-0 kubenswrapper[29458]: I0308 22:13:52.705891 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d063b330-4180-43de-a248-c573183d96f1-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh\" (UID: \"d063b330-4180-43de-a248-c573183d96f1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" Mar 08 22:13:52.706508 master-0 kubenswrapper[29458]: I0308 22:13:52.705939 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:13:52.706508 master-0 kubenswrapper[29458]: I0308 22:13:52.705969 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:13:52.706508 master-0 kubenswrapper[29458]: I0308 22:13:52.705982 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-telemeter-client-tls\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:13:52.706508 master-0 kubenswrapper[29458]: I0308 22:13:52.706000 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-run-ovn-kubernetes\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.706508 master-0 kubenswrapper[29458]: I0308 22:13:52.706014 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-tuning-conf-dir\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 22:13:52.706508 master-0 kubenswrapper[29458]: I0308 22:13:52.706062 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/96a67acb-9cc6-4793-b99a-01479b239d76-tuning-conf-dir\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 22:13:52.706508 master-0 kubenswrapper[29458]: I0308 22:13:52.706123 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0269ed52-a753-49aa-9c38-c7aee23cebbd-metrics-client-ca\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:13:52.706508 master-0 kubenswrapper[29458]: I0308 22:13:52.706167 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 22:13:52.706508 master-0 kubenswrapper[29458]: I0308 22:13:52.706207 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86mrp\" (UniqueName: \"kubernetes.io/projected/00db426a-15d4-4737-a85e-b4cf6362c759-kube-api-access-86mrp\") pod \"multus-admission-controller-7769569c45-9lhn8\" (UID: \"00db426a-15d4-4737-a85e-b4cf6362c759\") " 
pod="openshift-multus/multus-admission-controller-7769569c45-9lhn8" Mar 08 22:13:52.706508 master-0 kubenswrapper[29458]: I0308 22:13:52.706365 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-run-ovn\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.706508 master-0 kubenswrapper[29458]: I0308 22:13:52.706404 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-run-ovn\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.706508 master-0 kubenswrapper[29458]: I0308 22:13:52.706437 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 22:13:52.707265 master-0 kubenswrapper[29458]: I0308 22:13:52.706547 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/345ca27a-f572-4efa-b0ce-dfa8243becd6-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"345ca27a-f572-4efa-b0ce-dfa8243becd6\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 08 22:13:52.707265 master-0 kubenswrapper[29458]: I0308 22:13:52.706602 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-telemeter-trusted-ca-bundle\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:13:52.707265 master-0 kubenswrapper[29458]: I0308 22:13:52.706708 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0269ed52-a753-49aa-9c38-c7aee23cebbd-sys\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:13:52.707265 master-0 kubenswrapper[29458]: I0308 22:13:52.706729 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.707265 master-0 kubenswrapper[29458]: I0308 22:13:52.706748 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-sysconfig\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 22:13:52.707265 master-0 kubenswrapper[29458]: I0308 22:13:52.706768 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/3e38e989-41b8-4c80-99fb-8d414dda5da1-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-m7phf\" (UID: \"3e38e989-41b8-4c80-99fb-8d414dda5da1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf" Mar 08 22:13:52.707265 master-0 kubenswrapper[29458]: I0308 22:13:52.706787 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-ln9l2\" (UID: \"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2" Mar 08 22:13:52.707265 master-0 kubenswrapper[29458]: I0308 22:13:52.706809 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-cni-dir\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 22:13:52.707265 master-0 kubenswrapper[29458]: I0308 22:13:52.706829 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-slash\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.707265 master-0 kubenswrapper[29458]: I0308 22:13:52.706858 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8a7e92d4-b7ed-408b-b7cf-00209a627bea-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-jd2m9\" (UID: \"8a7e92d4-b7ed-408b-b7cf-00209a627bea\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9" Mar 08 22:13:52.707265 master-0 kubenswrapper[29458]: I0308 22:13:52.706882 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:13:52.707265 master-0 kubenswrapper[29458]: I0308 22:13:52.706902 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-metrics-server-audit-profiles\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:13:52.707265 master-0 kubenswrapper[29458]: I0308 22:13:52.706922 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/c3af41e9-c604-48da-bec5-df81c2ef3374-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:13:52.707265 master-0 kubenswrapper[29458]: I0308 22:13:52.706942 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: 
\"kubernetes.io/configmap/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:13:52.707265 master-0 kubenswrapper[29458]: I0308 22:13:52.706965 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:13:52.707265 master-0 kubenswrapper[29458]: I0308 22:13:52.707045 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-ln9l2\" (UID: \"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2" Mar 08 22:13:52.707265 master-0 kubenswrapper[29458]: I0308 22:13:52.707057 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq2ch\" (UniqueName: \"kubernetes.io/projected/ecb3134a-ff4f-4069-8817-010b400296f6-kube-api-access-pq2ch\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:13:52.707265 master-0 kubenswrapper[29458]: I0308 22:13:52.707175 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:13:52.708686 master-0 kubenswrapper[29458]: I0308 22:13:52.707289 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-run-multus-certs\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 22:13:52.708686 master-0 kubenswrapper[29458]: I0308 22:13:52.707353 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-cni-bin\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.708686 master-0 kubenswrapper[29458]: I0308 22:13:52.707357 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-multus-cni-dir\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 22:13:52.708686 master-0 kubenswrapper[29458]: I0308 22:13:52.707389 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-slash\") pod 
\"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.708686 master-0 kubenswrapper[29458]: I0308 22:13:52.707414 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e-webhook-cert\") pod \"packageserver-f988cd549-68kmh\" (UID: \"4e2eb05c-eaa5-4d9b-abad-c0ef6835087e\") " pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh" Mar 08 22:13:52.708686 master-0 kubenswrapper[29458]: I0308 22:13:52.707473 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d063b330-4180-43de-a248-c573183d96f1-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh\" (UID: \"d063b330-4180-43de-a248-c573183d96f1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" Mar 08 22:13:52.708686 master-0 kubenswrapper[29458]: I0308 22:13:52.707508 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a7e92d4-b7ed-408b-b7cf-00209a627bea-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-jd2m9\" (UID: \"8a7e92d4-b7ed-408b-b7cf-00209a627bea\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9" Mar 08 22:13:52.708686 master-0 kubenswrapper[29458]: I0308 22:13:52.707678 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e-apiservice-cert\") pod \"packageserver-f988cd549-68kmh\" (UID: \"4e2eb05c-eaa5-4d9b-abad-c0ef6835087e\") " pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh" Mar 08 22:13:52.708686 master-0 kubenswrapper[29458]: I0308 22:13:52.707686 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-cni-bin\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.708686 master-0 kubenswrapper[29458]: I0308 22:13:52.707714 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/d063b330-4180-43de-a248-c573183d96f1-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh\" (UID: \"d063b330-4180-43de-a248-c573183d96f1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" Mar 08 22:13:52.708686 master-0 kubenswrapper[29458]: I0308 22:13:52.707831 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.708686 master-0 kubenswrapper[29458]: I0308 22:13:52.707852 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-run-multus-certs\") pod 
\"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 22:13:52.708686 master-0 kubenswrapper[29458]: I0308 22:13:52.707972 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/345ca27a-f572-4efa-b0ce-dfa8243becd6-kube-api-access\") pod \"installer-4-master-0\" (UID: \"345ca27a-f572-4efa-b0ce-dfa8243becd6\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 08 22:13:52.708686 master-0 kubenswrapper[29458]: I0308 22:13:52.708041 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4b5246dc-b715-4678-a3a9-878df57dd236-certs\") pod \"machine-config-server-svxwz\" (UID: \"4b5246dc-b715-4678-a3a9-878df57dd236\") " pod="openshift-machine-config-operator/machine-config-server-svxwz" Mar 08 22:13:52.708686 master-0 kubenswrapper[29458]: I0308 22:13:52.708114 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:13:52.708686 master-0 kubenswrapper[29458]: I0308 22:13:52.708166 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-systemd-units\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.708686 master-0 kubenswrapper[29458]: I0308 22:13:52.708185 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:13:52.708686 master-0 kubenswrapper[29458]: I0308 22:13:52.708217 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-sys\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 22:13:52.708686 master-0 kubenswrapper[29458]: I0308 22:13:52.708238 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-systemd-units\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.708686 master-0 kubenswrapper[29458]: I0308 22:13:52.708268 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-client-ca-bundle\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:13:52.708686 master-0 kubenswrapper[29458]: I0308 22:13:52.708301 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: 
\"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-etc-sysconfig\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 22:13:52.708686 master-0 kubenswrapper[29458]: I0308 22:13:52.708340 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-sys\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 22:13:52.708686 master-0 kubenswrapper[29458]: I0308 22:13:52.708562 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/c3af41e9-c604-48da-bec5-df81c2ef3374-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:13:52.708686 master-0 kubenswrapper[29458]: I0308 22:13:52.708619 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0269ed52-a753-49aa-9c38-c7aee23cebbd-node-exporter-textfile\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:13:52.708686 master-0 kubenswrapper[29458]: I0308 22:13:52.708664 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-var-lib-openvswitch\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.708686 master-0 kubenswrapper[29458]: I0308 22:13:52.708692 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-secret-telemeter-client\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:13:52.709395 master-0 kubenswrapper[29458]: I0308 22:13:52.708745 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-os-release\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 22:13:52.709395 master-0 kubenswrapper[29458]: I0308 22:13:52.708782 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:13:52.709395 master-0 kubenswrapper[29458]: I0308 22:13:52.708802 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-run-k8s-cni-cncf-io\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 22:13:52.709395 master-0 kubenswrapper[29458]: I0308 22:13:52.708863 29458 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0269ed52-a753-49aa-9c38-c7aee23cebbd-node-exporter-textfile\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:13:52.709395 master-0 kubenswrapper[29458]: I0308 22:13:52.708890 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-host-run-k8s-cni-cncf-io\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 22:13:52.709395 master-0 kubenswrapper[29458]: I0308 22:13:52.708967 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b358dcb7-d01f-4206-b636-b55a599a73bd-host-slash\") pod \"iptables-alerter-pwn9k\" (UID: \"b358dcb7-d01f-4206-b636-b55a599a73bd\") " pod="openshift-network-operator/iptables-alerter-pwn9k" Mar 08 22:13:52.709395 master-0 kubenswrapper[29458]: I0308 22:13:52.709031 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-var-lib-openvswitch\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:52.709395 master-0 kubenswrapper[29458]: I0308 22:13:52.709033 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-config\") pod \"machine-approver-754bdc9f9d-stxvg\" (UID: \"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg" Mar 08 22:13:52.709395 master-0 kubenswrapper[29458]: I0308 22:13:52.709064 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:13:52.709395 master-0 kubenswrapper[29458]: I0308 22:13:52.709136 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b358dcb7-d01f-4206-b636-b55a599a73bd-host-slash\") pod \"iptables-alerter-pwn9k\" (UID: \"b358dcb7-d01f-4206-b636-b55a599a73bd\") " pod="openshift-network-operator/iptables-alerter-pwn9k" Mar 08 22:13:52.709395 master-0 kubenswrapper[29458]: I0308 22:13:52.709240 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-stxvg\" (UID: \"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg" Mar 08 22:13:52.709395 master-0 kubenswrapper[29458]: I0308 22:13:52.709385 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/385e69e4-d443-44bb-8ee4-578a1c902c62-os-release\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" 
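The mount records above come in pairs: reconciler_common.go:218 logs "operationExecutor.MountVolume started" when the kubelet's reconciler queues a mount for a volume in its desired state of world, and operation_generator.go:637 logs "MountVolume.SetUp succeeded" once that mount completes. A volume that logs "started" but never "succeeded" is the usual first clue when a pod hangs in ContainerCreating. Below is a minimal Go sketch that pairs the two messages from a saved kubelet journal; the regexes and the journalctl pipeline are illustrative assumptions matched to this excerpt's message format, not a stable kubelet interface.

    // pairmounts.go - cross-check kubelet volume-mount logs (illustrative sketch).
    // Assumed usage: journalctl -u kubelet --no-pager | go run pairmounts.go
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    )

    // The \\? makes the escaped quote (\") optional, so the patterns tolerate
    // both the journal rendering above and unescaped copies of the same lines.
    var (
    	started   = regexp.MustCompile(`MountVolume started for volume \\?"([^"\\]+)\\?" \(UniqueName: \\?"([^"\\]+)\\?"\)`)
    	succeeded = regexp.MustCompile(`MountVolume\.SetUp succeeded for volume \\?"([^"\\]+)\\?" \(UniqueName: \\?"([^"\\]+)\\?"\)`)
    )

    func main() {
    	pending := map[string]string{} // UniqueName -> volume name
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
    	for sc.Scan() {
    		line := sc.Text()
    		if m := started.FindStringSubmatch(line); m != nil {
    			pending[m[2]] = m[1] // remember the mount as outstanding
    		}
    		if m := succeeded.FindStringSubmatch(line); m != nil {
    			delete(pending, m[2]) // mount completed; clear it
    		}
    	}
    	for uniq, vol := range pending {
    		fmt.Printf("no SetUp succeeded seen for %q (%s)\n", vol, uniq)
    	}
    }

On this boot, the leftovers such a check reports are exactly the secret and configmap volumes whose watch caches had not synced yet; nestedpendingoperations.go requeues those further down with a 500ms durationBeforeRetry, the initial volume-retry backoff, which grows on repeated failures.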
Mar 08 22:13:52.709395 master-0 kubenswrapper[29458]: I0308 22:13:52.709401 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-node-pullsecrets\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6"
Mar 08 22:13:52.709746 master-0 kubenswrapper[29458]: I0308 22:13:52.709434 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-node-pullsecrets\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6"
Mar 08 22:13:52.709746 master-0 kubenswrapper[29458]: I0308 22:13:52.709458 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2nfk\" (UniqueName: \"kubernetes.io/projected/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-api-access-z2nfk\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc"
Mar 08 22:13:52.709746 master-0 kubenswrapper[29458]: I0308 22:13:52.709512 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/077643a2-ab2d-4f12-9abf-42a1c56d7aff-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-nk294\" (UID: \"077643a2-ab2d-4f12-9abf-42a1c56d7aff\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294"
Mar 08 22:13:52.709746 master-0 kubenswrapper[29458]: I0308 22:13:52.709560 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x"
Mar 08 22:13:52.709746 master-0 kubenswrapper[29458]: I0308 22:13:52.709599 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c377685c-2024-4ef7-932d-5858eeb0d9bd-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-8rbn8\" (UID: \"c377685c-2024-4ef7-932d-5858eeb0d9bd\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8"
Mar 08 22:13:52.709746 master-0 kubenswrapper[29458]: I0308 22:13:52.709629 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/669ef8c8-8a32-4ebd-acc4-e8b2b45286a0-hosts-file\") pod \"node-resolver-qdc2p\" (UID: \"669ef8c8-8a32-4ebd-acc4-e8b2b45286a0\") " pod="openshift-dns/node-resolver-qdc2p"
Mar 08 22:13:52.709746 master-0 kubenswrapper[29458]: I0308 22:13:52.709657 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-run-systemd\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r"
Mar 08 22:13:52.709746 master-0 kubenswrapper[29458]: I0308 22:13:52.709680 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/077643a2-ab2d-4f12-9abf-42a1c56d7aff-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-nk294\" (UID: \"077643a2-ab2d-4f12-9abf-42a1c56d7aff\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294"
Mar 08 22:13:52.709746 master-0 kubenswrapper[29458]: I0308 22:13:52.709682 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-host\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5"
Mar 08 22:13:52.709746 master-0 kubenswrapper[29458]: I0308 22:13:52.709718 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f3fbcd83-a3e1-4de1-aceb-2692d348e495-host\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5"
Mar 08 22:13:52.709746 master-0 kubenswrapper[29458]: I0308 22:13:52.709720 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-secret-metrics-client-certs\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x"
Mar 08 22:13:52.710052 master-0 kubenswrapper[29458]: I0308 22:13:52.709851 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/669ef8c8-8a32-4ebd-acc4-e8b2b45286a0-hosts-file\") pod \"node-resolver-qdc2p\" (UID: \"669ef8c8-8a32-4ebd-acc4-e8b2b45286a0\") " pod="openshift-dns/node-resolver-qdc2p"
Mar 08 22:13:52.710052 master-0 kubenswrapper[29458]: I0308 22:13:52.709871 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-run-systemd\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r"
Mar 08 22:13:52.710261 master-0 kubenswrapper[29458]: I0308 22:13:52.710115 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 08 22:13:52.717673 master-0 kubenswrapper[29458]: I0308 22:13:52.717018 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-trusted-ca-bundle\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6"
Mar 08 22:13:52.723696 master-0 kubenswrapper[29458]: I0308 22:13:52.722440 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 08 22:13:52.730535 master-0 kubenswrapper[29458]: I0308 22:13:52.730413 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-ovn-node-metrics-cert\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r"
Mar 08 22:13:52.742323 master-0 kubenswrapper[29458]: I0308 22:13:52.742236 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 08 22:13:52.762632 master-0 kubenswrapper[29458]: I0308 22:13:52.762560 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 08 22:13:52.764790 master-0 kubenswrapper[29458]: I0308 22:13:52.764726 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-config\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6"
Mar 08 22:13:52.783354 master-0 kubenswrapper[29458]: I0308 22:13:52.783167 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 08 22:13:52.807193 master-0 kubenswrapper[29458]: I0308 22:13:52.807114 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 08 22:13:52.811258 master-0 kubenswrapper[29458]: I0308 22:13:52.811180 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1d188983-1f19-4c8e-b604-034bd6308139-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"1d188983-1f19-4c8e-b604-034bd6308139\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 08 22:13:52.811583 master-0 kubenswrapper[29458]: I0308 22:13:52.811424 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1d188983-1f19-4c8e-b604-034bd6308139-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"1d188983-1f19-4c8e-b604-034bd6308139\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 08 22:13:52.812683 master-0 kubenswrapper[29458]: I0308 22:13:52.811690 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0269ed52-a753-49aa-9c38-c7aee23cebbd-root\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g"
Mar 08 22:13:52.812683 master-0 kubenswrapper[29458]: I0308 22:13:52.811781 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0269ed52-a753-49aa-9c38-c7aee23cebbd-root\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g"
Mar 08 22:13:52.812683 master-0 kubenswrapper[29458]: I0308 22:13:52.811816 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access\") pod \"installer-2-master-0\" (UID: \"1d188983-1f19-4c8e-b604-034bd6308139\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 08 22:13:52.812683 master-0 kubenswrapper[29458]: I0308 22:13:52.811920 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3-rootfs\") pod \"machine-config-daemon-q669r\" (UID: \"7868a4fb-af89-4bdc-b41b-31f4ee59b5f3\") " pod="openshift-machine-config-operator/machine-config-daemon-q669r"
Mar 08 22:13:52.812683 master-0 kubenswrapper[29458]: I0308 22:13:52.811941 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0269ed52-a753-49aa-9c38-c7aee23cebbd-node-exporter-wtmp\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g"
Mar 08 22:13:52.812683 master-0 kubenswrapper[29458]: I0308 22:13:52.812200 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/345ca27a-f572-4efa-b0ce-dfa8243becd6-var-lock\") pod \"installer-4-master-0\" (UID: \"345ca27a-f572-4efa-b0ce-dfa8243becd6\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 08 22:13:52.812683 master-0 kubenswrapper[29458]: I0308 22:13:52.812204 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3-rootfs\") pod \"machine-config-daemon-q669r\" (UID: \"7868a4fb-af89-4bdc-b41b-31f4ee59b5f3\") " pod="openshift-machine-config-operator/machine-config-daemon-q669r"
Mar 08 22:13:52.812683 master-0 kubenswrapper[29458]: I0308 22:13:52.812279 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0269ed52-a753-49aa-9c38-c7aee23cebbd-node-exporter-wtmp\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g"
Mar 08 22:13:52.812683 master-0 kubenswrapper[29458]: I0308 22:13:52.812395 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/345ca27a-f572-4efa-b0ce-dfa8243becd6-var-lock\") pod \"installer-4-master-0\" (UID: \"345ca27a-f572-4efa-b0ce-dfa8243becd6\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 08 22:13:52.812683 master-0 kubenswrapper[29458]: I0308 22:13:52.812539 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/345ca27a-f572-4efa-b0ce-dfa8243becd6-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"345ca27a-f572-4efa-b0ce-dfa8243becd6\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 08 22:13:52.812683 master-0 kubenswrapper[29458]: I0308 22:13:52.812602 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0269ed52-a753-49aa-9c38-c7aee23cebbd-sys\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g"
Mar 08 22:13:52.812996 master-0 kubenswrapper[29458]: I0308 22:13:52.812713 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/345ca27a-f572-4efa-b0ce-dfa8243becd6-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"345ca27a-f572-4efa-b0ce-dfa8243becd6\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 08 22:13:52.813227 master-0 kubenswrapper[29458]: I0308 22:13:52.813203 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0269ed52-a753-49aa-9c38-c7aee23cebbd-sys\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g"
Mar 08 22:13:52.813445 master-0 kubenswrapper[29458]: I0308 22:13:52.813406 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/d063b330-4180-43de-a248-c573183d96f1-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh\" (UID: \"d063b330-4180-43de-a248-c573183d96f1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh"
Mar 08 22:13:52.813571 master-0 kubenswrapper[29458]: I0308 22:13:52.813544 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/d063b330-4180-43de-a248-c573183d96f1-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh\" (UID: \"d063b330-4180-43de-a248-c573183d96f1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh"
Mar 08 22:13:52.813798 master-0 kubenswrapper[29458]: I0308 22:13:52.813777 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1d188983-1f19-4c8e-b604-034bd6308139-var-lock\") pod \"installer-2-master-0\" (UID: \"1d188983-1f19-4c8e-b604-034bd6308139\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 08 22:13:52.813840 master-0 kubenswrapper[29458]: I0308 22:13:52.813817 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1d188983-1f19-4c8e-b604-034bd6308139-var-lock\") pod \"installer-2-master-0\" (UID: \"1d188983-1f19-4c8e-b604-034bd6308139\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 08 22:13:52.821160 master-0 kubenswrapper[29458]: I0308 22:13:52.821126 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 08 22:13:52.830713 master-0 kubenswrapper[29458]: I0308 22:13:52.830425 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-ovnkube-script-lib\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r"
Mar 08 22:13:52.840886 master-0 kubenswrapper[29458]: I0308 22:13:52.840801 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Mar 08 22:13:52.846020 master-0 kubenswrapper[29458]: I0308 22:13:52.845951 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e635b0da-956b-4636-bc9b-61f231241908-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-kx9pl\" (UID: \"e635b0da-956b-4636-bc9b-61f231241908\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kx9pl"
Mar 08 22:13:52.860795 master-0 kubenswrapper[29458]: I0308 22:13:52.860730 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 08 22:13:52.870751 master-0 kubenswrapper[29458]: I0308 22:13:52.870192 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-audit\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6"
Mar 08 22:13:52.884202 master-0 kubenswrapper[29458]: I0308 22:13:52.884124 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 08 22:13:52.891648 master-0 kubenswrapper[29458]: I0308 22:13:52.891597 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-encryption-config\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6"
Mar 08 22:13:52.901171 master-0 kubenswrapper[29458]: I0308 22:13:52.901101 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 08 22:13:52.906961 master-0 kubenswrapper[29458]: I0308 22:13:52.906896 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-etcd-client\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6"
Mar 08 22:13:52.920565 master-0 kubenswrapper[29458]: I0308 22:13:52.920481 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 08 22:13:52.942352 master-0 kubenswrapper[29458]: I0308 22:13:52.942283 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Mar 08 22:13:52.946162 master-0 kubenswrapper[29458]: I0308 22:13:52.946106 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0cb21214-292a-48ee-85e2-6b1e62f40cb4-config-volume\") pod \"dns-default-65ts8\" (UID: \"0cb21214-292a-48ee-85e2-6b1e62f40cb4\") " pod="openshift-dns/dns-default-65ts8"
Mar 08 22:13:52.961892 master-0 kubenswrapper[29458]: I0308 22:13:52.961822 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Mar 08 22:13:52.964193 master-0 kubenswrapper[29458]: I0308 22:13:52.964133 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 22:13:52.965433 master-0 kubenswrapper[29458]: I0308 22:13:52.965339 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0cb21214-292a-48ee-85e2-6b1e62f40cb4-metrics-tls\") pod \"dns-default-65ts8\" (UID: \"0cb21214-292a-48ee-85e2-6b1e62f40cb4\") " pod="openshift-dns/dns-default-65ts8"
Mar 08 22:13:52.970472 master-0 kubenswrapper[29458]: I0308 22:13:52.970409 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Mar 08 22:13:52.981925 master-0 kubenswrapper[29458]: I0308 22:13:52.981705 29458 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Mar 08 22:13:52.982999 master-0 kubenswrapper[29458]: I0308 22:13:52.982835 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Mar 08 22:13:53.001645 master-0 kubenswrapper[29458]: I0308 22:13:53.001595 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 08 22:13:53.011492 master-0 kubenswrapper[29458]: I0308 22:13:53.011425 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b358dcb7-d01f-4206-b636-b55a599a73bd-iptables-alerter-script\") pod \"iptables-alerter-pwn9k\" (UID: \"b358dcb7-d01f-4206-b636-b55a599a73bd\") " pod="openshift-network-operator/iptables-alerter-pwn9k"
Mar 08 22:13:53.021247 master-0 kubenswrapper[29458]: I0308 22:13:53.020798 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 08 22:13:53.041537 master-0 kubenswrapper[29458]: I0308 22:13:53.041463 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 08 22:13:53.052432 master-0 kubenswrapper[29458]: I0308 22:13:53.052351 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-serving-cert\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6"
Mar 08 22:13:53.061840 master-0 kubenswrapper[29458]: I0308 22:13:53.061784 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 08 22:13:53.083800 master-0 kubenswrapper[29458]: I0308 22:13:53.083736 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 08 22:13:53.085541 master-0 kubenswrapper[29458]: I0308 22:13:53.085486 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/81f5ed55-225c-41e2-bc9d-b41063a604c9-default-certificate\") pod \"router-default-79f8cd6fdd-4fsdl\" (UID: \"81f5ed55-225c-41e2-bc9d-b41063a604c9\") " pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl"
Mar 08 22:13:53.101721 master-0 kubenswrapper[29458]: I0308 22:13:53.101655 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Mar 08 22:13:53.110914 master-0 kubenswrapper[29458]: I0308 22:13:53.110828 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/81f5ed55-225c-41e2-bc9d-b41063a604c9-stats-auth\") pod \"router-default-79f8cd6fdd-4fsdl\" (UID: \"81f5ed55-225c-41e2-bc9d-b41063a604c9\") " pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl"
Mar 08 22:13:53.121464 master-0 kubenswrapper[29458]: I0308 22:13:53.121420 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 08 22:13:53.131260 master-0 kubenswrapper[29458]: I0308 22:13:53.131199 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81f5ed55-225c-41e2-bc9d-b41063a604c9-metrics-certs\") pod \"router-default-79f8cd6fdd-4fsdl\" (UID: \"81f5ed55-225c-41e2-bc9d-b41063a604c9\") " pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl"
Mar 08 22:13:53.139958 master-0 kubenswrapper[29458]: I0308 22:13:53.139911 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 08 22:13:53.145405 master-0 kubenswrapper[29458]: I0308 22:13:53.145348 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5afb146-31d7-4da9-8738-b6c15528233a-trusted-ca-bundle\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg"
Mar 08 22:13:53.160340 master-0 kubenswrapper[29458]: I0308 22:13:53.160288 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 08 22:13:53.164521 master-0 kubenswrapper[29458]: I0308 22:13:53.164472 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a5afb146-31d7-4da9-8738-b6c15528233a-encryption-config\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg"
Mar 08 22:13:53.180517 master-0 kubenswrapper[29458]: I0308 22:13:53.180483 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 08 22:13:53.201161 master-0 kubenswrapper[29458]: I0308 22:13:53.201059 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Mar 08 22:13:53.220978 master-0 kubenswrapper[29458]: I0308 22:13:53.220890 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Mar 08 22:13:53.241209 master-0 kubenswrapper[29458]: I0308 22:13:53.241020 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 08 22:13:53.247610 master-0 kubenswrapper[29458]: I0308 22:13:53.247531 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a5afb146-31d7-4da9-8738-b6c15528233a-audit-policies\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg"
Mar 08 22:13:53.261292 master-0 kubenswrapper[29458]: I0308 22:13:53.261203 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 08 22:13:53.268147 master-0 kubenswrapper[29458]: I0308 22:13:53.268028 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81f5ed55-225c-41e2-bc9d-b41063a604c9-service-ca-bundle\") pod \"router-default-79f8cd6fdd-4fsdl\" (UID: \"81f5ed55-225c-41e2-bc9d-b41063a604c9\") " pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl"
Mar 08 22:13:53.282770 master-0 kubenswrapper[29458]: I0308 22:13:53.282697 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 08 22:13:53.285935 master-0 kubenswrapper[29458]: I0308 22:13:53.285892 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a5afb146-31d7-4da9-8738-b6c15528233a-etcd-serving-ca\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg"
Mar 08 22:13:53.301461 master-0 kubenswrapper[29458]: I0308 22:13:53.301331 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 08 22:13:53.304864 master-0 kubenswrapper[29458]: I0308 22:13:53.304815 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a5afb146-31d7-4da9-8738-b6c15528233a-etcd-client\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg"
Mar 08 22:13:53.320866 master-0 kubenswrapper[29458]: I0308 22:13:53.320785 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 08 22:13:53.325530 master-0 kubenswrapper[29458]: I0308 22:13:53.325477 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5afb146-31d7-4da9-8738-b6c15528233a-serving-cert\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg"
Mar 08 22:13:53.341778 master-0 kubenswrapper[29458]: I0308 22:13:53.341723 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Mar 08 22:13:53.385900 master-0 kubenswrapper[29458]: I0308 22:13:53.385844 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Mar 08 22:13:53.389045 master-0 kubenswrapper[29458]: I0308 22:13:53.387641 29458 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 22:13:53.405695 master-0 kubenswrapper[29458]: I0308 22:13:53.405634 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Mar 08 22:13:53.406786 master-0 kubenswrapper[29458]: I0308 22:13:53.406752 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/077643a2-ab2d-4f12-9abf-42a1c56d7aff-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-nk294\" (UID: \"077643a2-ab2d-4f12-9abf-42a1c56d7aff\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294"
Mar 08 22:13:53.407476 master-0 kubenswrapper[29458]: I0308 22:13:53.407444 29458 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 22:13:53.407537 master-0 kubenswrapper[29458]: I0308 22:13:53.407491 29458 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 22:13:53.407537 master-0 kubenswrapper[29458]: I0308 22:13:53.407504 29458 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 22:13:53.408930 master-0 kubenswrapper[29458]: I0308 22:13:53.408364 29458 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 08 22:13:53.419458 master-0 kubenswrapper[29458]: I0308 22:13:53.419371 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Mar 08 22:13:53.426535 master-0 kubenswrapper[29458]: I0308 22:13:53.425027 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Mar 08 22:13:53.426535 master-0 kubenswrapper[29458]: I0308 22:13:53.426318 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-qv4bv\" (UID: \"2a91f36f-900e-4b99-9be1-dfc61d8e31d9\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv"
Mar 08 22:13:53.455404 master-0 kubenswrapper[29458]: I0308 22:13:53.449525 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Mar 08 22:13:53.469571 master-0 kubenswrapper[29458]: I0308 22:13:53.469341 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Mar 08 22:13:53.478659 master-0 kubenswrapper[29458]: I0308 22:13:53.478602 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-qv4bv\" (UID: \"2a91f36f-900e-4b99-9be1-dfc61d8e31d9\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv"
Mar 08 22:13:53.480684 master-0 kubenswrapper[29458]: I0308 22:13:53.480653 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 08 22:13:53.493570 master-0 kubenswrapper[29458]: I0308 22:13:53.491794 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9-serving-cert\") pod \"cluster-version-operator-8c9c967c7-ln9l2\" (UID: \"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2"
Mar 08 22:13:53.500779 master-0 kubenswrapper[29458]: I0308 22:13:53.500719 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 08 22:13:53.503161 master-0 kubenswrapper[29458]: I0308 22:13:53.503111 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Mar 08 22:13:53.510347 master-0 kubenswrapper[29458]: E0308 22:13:53.510304 29458 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 08 22:13:53.516685 master-0 kubenswrapper[29458]: I0308 22:13:53.516638 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Mar 08 22:13:53.518855 master-0 kubenswrapper[29458]: I0308 22:13:53.518828 29458 request.go:700] Waited for 1.005271065s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0
Mar 08 22:13:53.520943 master-0 kubenswrapper[29458]: I0308 22:13:53.520922 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 08 22:13:53.530565 master-0 kubenswrapper[29458]: I0308 22:13:53.530515 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9-service-ca\") pod \"cluster-version-operator-8c9c967c7-ln9l2\" (UID: \"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2"
Mar 08 22:13:53.540571 master-0 kubenswrapper[29458]: I0308 22:13:53.540538 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 08 22:13:53.544350 master-0 kubenswrapper[29458]: I0308 22:13:53.544288 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da51940a-a38f-4baf-9c14-b2f1f46b5aed-client-ca\") pod \"route-controller-manager-86888d445f-7f74k\" (UID: \"da51940a-a38f-4baf-9c14-b2f1f46b5aed\") " pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k"
Mar 08 22:13:53.561490 master-0 kubenswrapper[29458]: I0308 22:13:53.561438 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 08 22:13:53.580638 master-0 kubenswrapper[29458]: I0308 22:13:53.580594 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 08 22:13:53.581175 master-0 kubenswrapper[29458]: E0308 22:13:53.581152 29458 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 08 22:13:53.581420 master-0 kubenswrapper[29458]: E0308 22:13:53.581401 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad-cluster-storage-operator-serving-cert podName:c228b17c-fd7b-4273-ac03-eac5d4a3a4ad nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.081381503 +0000 UTC m=+3.369439095 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-6fbfc8dc8f-p68k6" (UID: "c228b17c-fd7b-4273-ac03-eac5d4a3a4ad") : failed to sync secret cache: timed out waiting for the condition
Mar 08 22:13:53.581663 master-0 kubenswrapper[29458]: E0308 22:13:53.581320 29458 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 08 22:13:53.581760 master-0 kubenswrapper[29458]: E0308 22:13:53.581750 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-client-ca podName:2395900a-ff6b-46ff-92c6-a8a1b5675b67 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.081740423 +0000 UTC m=+3.369798015 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-client-ca") pod "controller-manager-f7df5f5b-txsrq" (UID: "2395900a-ff6b-46ff-92c6-a8a1b5675b67") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 22:13:53.586054 master-0 kubenswrapper[29458]: E0308 22:13:53.586027 29458 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition
Mar 08 22:13:53.586219 master-0 kubenswrapper[29458]: E0308 22:13:53.586086 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9fe466f-5a23-4f69-8a96-44bd5d6194f5-cert podName:d9fe466f-5a23-4f69-8a96-44bd5d6194f5 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.086064798 +0000 UTC m=+3.374122390 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d9fe466f-5a23-4f69-8a96-44bd5d6194f5-cert") pod "cluster-autoscaler-operator-69576476f7-dvgxg" (UID: "d9fe466f-5a23-4f69-8a96-44bd5d6194f5") : failed to sync secret cache: timed out waiting for the condition
Mar 08 22:13:53.586369 master-0 kubenswrapper[29458]: E0308 22:13:53.586354 29458 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 08 22:13:53.586456 master-0 kubenswrapper[29458]: E0308 22:13:53.586445 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/66e50eed-e3ac-431f-931b-7c4c848c491b-trusted-ca-bundle podName:66e50eed-e3ac-431f-931b-7c4c848c491b nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.086432448 +0000 UTC m=+3.374490040 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/66e50eed-e3ac-431f-931b-7c4c848c491b-trusted-ca-bundle") pod "insights-operator-8f89dfddd-fn4ck" (UID: "66e50eed-e3ac-431f-931b-7c4c848c491b") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 22:13:53.586553 master-0 kubenswrapper[29458]: E0308 22:13:53.586542 29458 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: failed to sync secret cache: timed out waiting for the condition
Mar 08 22:13:53.586631 master-0 kubenswrapper[29458]: E0308 22:13:53.586622 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9e9c931-9595-42f1-bbc2-c412286f6cd1-cert podName:d9e9c931-9595-42f1-bbc2-c412286f6cd1 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.086613633 +0000 UTC m=+3.374671225 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d9e9c931-9595-42f1-bbc2-c412286f6cd1-cert") pod "cluster-baremetal-operator-5cdb4c5598-xwmmm" (UID: "d9e9c931-9595-42f1-bbc2-c412286f6cd1") : failed to sync secret cache: timed out waiting for the condition
Mar 08 22:13:53.591416 master-0 kubenswrapper[29458]: E0308 22:13:53.591358 29458 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Mar 08 22:13:53.591531 master-0 kubenswrapper[29458]: E0308 22:13:53.591504 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/da51940a-a38f-4baf-9c14-b2f1f46b5aed-config podName:da51940a-a38f-4baf-9c14-b2f1f46b5aed nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.091472193 +0000 UTC m=+3.379529805 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/da51940a-a38f-4baf-9c14-b2f1f46b5aed-config") pod "route-controller-manager-86888d445f-7f74k" (UID: "da51940a-a38f-4baf-9c14-b2f1f46b5aed") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 22:13:53.591587 master-0 kubenswrapper[29458]: E0308 22:13:53.591559 29458 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 08 22:13:53.591615 master-0 kubenswrapper[29458]: E0308 22:13:53.591599 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-proxy-ca-bundles podName:2395900a-ff6b-46ff-92c6-a8a1b5675b67 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.091590377 +0000 UTC m=+3.379647969 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-proxy-ca-bundles") pod "controller-manager-f7df5f5b-txsrq" (UID: "2395900a-ff6b-46ff-92c6-a8a1b5675b67") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 22:13:53.591661 master-0 kubenswrapper[29458]: E0308 22:13:53.591645 29458 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: failed to sync configmap cache: timed out waiting for the condition
Mar 08 22:13:53.591697 master-0 kubenswrapper[29458]: E0308 22:13:53.591671 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d9e9c931-9595-42f1-bbc2-c412286f6cd1-images podName:d9e9c931-9595-42f1-bbc2-c412286f6cd1 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.091664599 +0000 UTC m=+3.379722181 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/d9e9c931-9595-42f1-bbc2-c412286f6cd1-images") pod "cluster-baremetal-operator-5cdb4c5598-xwmmm" (UID: "d9e9c931-9595-42f1-bbc2-c412286f6cd1") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 22:13:53.591730 master-0 kubenswrapper[29458]: E0308 22:13:53.591668 29458 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 08 22:13:53.591730 master-0 kubenswrapper[29458]: E0308 22:13:53.591723 29458 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Mar 08 22:13:53.591782 master-0 kubenswrapper[29458]: E0308 22:13:53.591747 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-config podName:2395900a-ff6b-46ff-92c6-a8a1b5675b67 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.091740501 +0000 UTC m=+3.379798093 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-config") pod "controller-manager-f7df5f5b-txsrq" (UID: "2395900a-ff6b-46ff-92c6-a8a1b5675b67") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 22:13:53.591782 master-0 kubenswrapper[29458]: E0308 22:13:53.591772 29458 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 08 22:13:53.591840 master-0 kubenswrapper[29458]: E0308 22:13:53.591698 29458 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 08 22:13:53.591840 master-0 kubenswrapper[29458]: E0308 22:13:53.591810 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66e50eed-e3ac-431f-931b-7c4c848c491b-serving-cert podName:66e50eed-e3ac-431f-931b-7c4c848c491b nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.091776982 +0000 UTC m=+3.379834664 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/66e50eed-e3ac-431f-931b-7c4c848c491b-serving-cert") pod "insights-operator-8f89dfddd-fn4ck" (UID: "66e50eed-e3ac-431f-931b-7c4c848c491b") : failed to sync secret cache: timed out waiting for the condition
Mar 08 22:13:53.591899 master-0 kubenswrapper[29458]: E0308 22:13:53.591849 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd9abe2b-f829-4376-9abe-7da0a08770e7-samples-operator-tls podName:fd9abe2b-f829-4376-9abe-7da0a08770e7 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.091839154 +0000 UTC m=+3.379896876 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/fd9abe2b-f829-4376-9abe-7da0a08770e7-samples-operator-tls") pod "cluster-samples-operator-664cb58b85-mkvtk" (UID: "fd9abe2b-f829-4376-9abe-7da0a08770e7") : failed to sync secret cache: timed out waiting for the condition
Mar 08 22:13:53.591899 master-0 kubenswrapper[29458]: E0308 22:13:53.591871 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da51940a-a38f-4baf-9c14-b2f1f46b5aed-serving-cert podName:da51940a-a38f-4baf-9c14-b2f1f46b5aed nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.091861604 +0000 UTC m=+3.379919326 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/da51940a-a38f-4baf-9c14-b2f1f46b5aed-serving-cert") pod "route-controller-manager-86888d445f-7f74k" (UID: "da51940a-a38f-4baf-9c14-b2f1f46b5aed") : failed to sync secret cache: timed out waiting for the condition
Mar 08 22:13:53.592015 master-0 kubenswrapper[29458]: E0308 22:13:53.591999 29458 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 08 22:13:53.592179 master-0 kubenswrapper[29458]: E0308 22:13:53.592145 29458 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 08 22:13:53.592255 master-0 kubenswrapper[29458]: E0308 22:13:53.592241 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d9e9c931-9595-42f1-bbc2-c412286f6cd1-config podName:d9e9c931-9595-42f1-bbc2-c412286f6cd1 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.092158743 +0000 UTC m=+3.380216335 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d9e9c931-9595-42f1-bbc2-c412286f6cd1-config") pod "cluster-baremetal-operator-5cdb4c5598-xwmmm" (UID: "d9e9c931-9595-42f1-bbc2-c412286f6cd1") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 22:13:53.592331 master-0 kubenswrapper[29458]: E0308 22:13:53.592318 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9e9c931-9595-42f1-bbc2-c412286f6cd1-cluster-baremetal-operator-tls podName:d9e9c931-9595-42f1-bbc2-c412286f6cd1 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.092307868 +0000 UTC m=+3.380365460 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/d9e9c931-9595-42f1-bbc2-c412286f6cd1-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5cdb4c5598-xwmmm" (UID: "d9e9c931-9595-42f1-bbc2-c412286f6cd1") : failed to sync secret cache: timed out waiting for the condition
Mar 08 22:13:53.592710 master-0 kubenswrapper[29458]: E0308 22:13:53.592672 29458 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 08 22:13:53.592805 master-0 kubenswrapper[29458]: E0308 22:13:53.592774 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6eb502a1-db10-46ba-b698-461919464fb9-control-plane-machine-set-operator-tls podName:6eb502a1-db10-46ba-b698-461919464fb9 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.09274932 +0000 UTC m=+3.380806922 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/6eb502a1-db10-46ba-b698-461919464fb9-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-6686554ddc-c246n" (UID: "6eb502a1-db10-46ba-b698-461919464fb9") : failed to sync secret cache: timed out waiting for the condition
Mar 08 22:13:53.593603 master-0 kubenswrapper[29458]: E0308 22:13:53.593587 29458 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition
Mar 08 22:13:53.593716 master-0 kubenswrapper[29458]: E0308 22:13:53.593705 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d9fe466f-5a23-4f69-8a96-44bd5d6194f5-auth-proxy-config podName:d9fe466f-5a23-4f69-8a96-44bd5d6194f5 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.093695118 +0000 UTC m=+3.381752710 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/d9fe466f-5a23-4f69-8a96-44bd5d6194f5-auth-proxy-config") pod "cluster-autoscaler-operator-69576476f7-dvgxg" (UID: "d9fe466f-5a23-4f69-8a96-44bd5d6194f5") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 22:13:53.594767 master-0 kubenswrapper[29458]: E0308 22:13:53.594754 29458 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 08 22:13:53.594868 master-0 kubenswrapper[29458]: E0308 22:13:53.594856 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8-cloud-credential-operator-serving-cert podName:2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.09484703 +0000 UTC m=+3.382904622 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-55d85b7b47-mfqlz" (UID: "2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8") : failed to sync secret cache: timed out waiting for the condition
Mar 08 22:13:53.594933 master-0 kubenswrapper[29458]: E0308 22:13:53.594768 29458 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 08 22:13:53.594978 master-0 kubenswrapper[29458]: E0308 22:13:53.594970 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8-cco-trusted-ca podName:2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.094957293 +0000 UTC m=+3.383014975 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8-cco-trusted-ca") pod "cloud-credential-operator-55d85b7b47-mfqlz" (UID: "2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 22:13:53.595848 master-0 kubenswrapper[29458]: E0308 22:13:53.595825 29458 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 08 22:13:53.595891 master-0 kubenswrapper[29458]: E0308 22:13:53.595881 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2395900a-ff6b-46ff-92c6-a8a1b5675b67-serving-cert podName:2395900a-ff6b-46ff-92c6-a8a1b5675b67 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.09587076 +0000 UTC m=+3.383928352 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2395900a-ff6b-46ff-92c6-a8a1b5675b67-serving-cert") pod "controller-manager-f7df5f5b-txsrq" (UID: "2395900a-ff6b-46ff-92c6-a8a1b5675b67") : failed to sync secret cache: timed out waiting for the condition
Mar 08 22:13:53.595985 master-0 kubenswrapper[29458]: E0308 22:13:53.595969 29458 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 08 22:13:53.596039 master-0 kubenswrapper[29458]: E0308 22:13:53.596029 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/66e50eed-e3ac-431f-931b-7c4c848c491b-service-ca-bundle podName:66e50eed-e3ac-431f-931b-7c4c848c491b nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.096020884 +0000 UTC m=+3.384078476 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/66e50eed-e3ac-431f-931b-7c4c848c491b-service-ca-bundle") pod "insights-operator-8f89dfddd-fn4ck" (UID: "66e50eed-e3ac-431f-931b-7c4c848c491b") : failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.601364 master-0 kubenswrapper[29458]: I0308 22:13:53.601316 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 08 22:13:53.620975 master-0 kubenswrapper[29458]: I0308 22:13:53.620909 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 08 22:13:53.633539 master-0 kubenswrapper[29458]: I0308 22:13:53.633475 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1d188983-1f19-4c8e-b604-034bd6308139-var-lock\") pod \"1d188983-1f19-4c8e-b604-034bd6308139\" (UID: \"1d188983-1f19-4c8e-b604-034bd6308139\") " Mar 08 22:13:53.633757 master-0 kubenswrapper[29458]: I0308 22:13:53.633688 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1d188983-1f19-4c8e-b604-034bd6308139-kubelet-dir\") pod \"1d188983-1f19-4c8e-b604-034bd6308139\" (UID: \"1d188983-1f19-4c8e-b604-034bd6308139\") " Mar 08 22:13:53.633888 master-0 kubenswrapper[29458]: I0308 22:13:53.633864 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d188983-1f19-4c8e-b604-034bd6308139-var-lock" (OuterVolumeSpecName: "var-lock") pod "1d188983-1f19-4c8e-b604-034bd6308139" (UID: "1d188983-1f19-4c8e-b604-034bd6308139"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:13:53.634029 master-0 kubenswrapper[29458]: I0308 22:13:53.633976 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d188983-1f19-4c8e-b604-034bd6308139-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1d188983-1f19-4c8e-b604-034bd6308139" (UID: "1d188983-1f19-4c8e-b604-034bd6308139"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:13:53.635708 master-0 kubenswrapper[29458]: I0308 22:13:53.635665 29458 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1d188983-1f19-4c8e-b604-034bd6308139-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 22:13:53.635708 master-0 kubenswrapper[29458]: I0308 22:13:53.635706 29458 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1d188983-1f19-4c8e-b604-034bd6308139-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 08 22:13:53.640718 master-0 kubenswrapper[29458]: I0308 22:13:53.640685 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 08 22:13:53.659715 master-0 kubenswrapper[29458]: I0308 22:13:53.659655 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-j75vf" Mar 08 22:13:53.680768 master-0 kubenswrapper[29458]: I0308 22:13:53.680700 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 08 22:13:53.699430 master-0 kubenswrapper[29458]: E0308 22:13:53.699367 29458 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.699728 master-0 kubenswrapper[29458]: E0308 22:13:53.699482 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c3af41e9-c604-48da-bec5-df81c2ef3374-metrics-client-ca podName:c3af41e9-c604-48da-bec5-df81c2ef3374 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.199462082 +0000 UTC m=+3.487519674 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/c3af41e9-c604-48da-bec5-df81c2ef3374-metrics-client-ca") pod "kube-state-metrics-68b88f8cb5-wznvc" (UID: "c3af41e9-c604-48da-bec5-df81c2ef3374") : failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.699728 master-0 kubenswrapper[29458]: E0308 22:13:53.699714 29458 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.699826 master-0 kubenswrapper[29458]: E0308 22:13:53.699754 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6bc6f78-2c5c-4add-925f-f6568a49c2cc-proxy-tls podName:b6bc6f78-2c5c-4add-925f-f6568a49c2cc nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.199744619 +0000 UTC m=+3.487802211 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/b6bc6f78-2c5c-4add-925f-f6568a49c2cc-proxy-tls") pod "machine-config-controller-ff46b7bdf-zn77m" (UID: "b6bc6f78-2c5c-4add-925f-f6568a49c2cc") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.699826 master-0 kubenswrapper[29458]: E0308 22:13:53.699791 29458 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.699826 master-0 kubenswrapper[29458]: E0308 22:13:53.699821 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e38e989-41b8-4c80-99fb-8d414dda5da1-proxy-tls podName:3e38e989-41b8-4c80-99fb-8d414dda5da1 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.199813771 +0000 UTC m=+3.487871363 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/3e38e989-41b8-4c80-99fb-8d414dda5da1-proxy-tls") pod "machine-config-operator-fdb5c78b5-m7phf" (UID: "3e38e989-41b8-4c80-99fb-8d414dda5da1") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.700056 master-0 kubenswrapper[29458]: E0308 22:13:53.700016 29458 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.700246 master-0 kubenswrapper[29458]: E0308 22:13:53.700229 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00db426a-15d4-4737-a85e-b4cf6362c759-webhook-certs podName:00db426a-15d4-4737-a85e-b4cf6362c759 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.200203073 +0000 UTC m=+3.488260735 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/00db426a-15d4-4737-a85e-b4cf6362c759-webhook-certs") pod "multus-admission-controller-7769569c45-9lhn8" (UID: "00db426a-15d4-4737-a85e-b4cf6362c759") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.700430 master-0 kubenswrapper[29458]: E0308 22:13:53.700378 29458 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.700491 master-0 kubenswrapper[29458]: E0308 22:13:53.700483 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1ef14467-bb62-462d-9dec-dee43e4cc9bd-config podName:1ef14467-bb62-462d-9dec-dee43e4cc9bd nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.200464021 +0000 UTC m=+3.488521613 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1ef14467-bb62-462d-9dec-dee43e4cc9bd-config") pod "machine-api-operator-84bf6db4f9-64gfj" (UID: "1ef14467-bb62-462d-9dec-dee43e4cc9bd") : failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.700990 master-0 kubenswrapper[29458]: E0308 22:13:53.700953 29458 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.701055 master-0 kubenswrapper[29458]: E0308 22:13:53.701009 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1ef14467-bb62-462d-9dec-dee43e4cc9bd-images podName:1ef14467-bb62-462d-9dec-dee43e4cc9bd nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.200998626 +0000 UTC m=+3.489056318 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/1ef14467-bb62-462d-9dec-dee43e4cc9bd-images") pod "machine-api-operator-84bf6db4f9-64gfj" (UID: "1ef14467-bb62-462d-9dec-dee43e4cc9bd") : failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.701463 master-0 kubenswrapper[29458]: I0308 22:13:53.701428 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 08 22:13:53.701646 master-0 kubenswrapper[29458]: E0308 22:13:53.701614 29458 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.701694 master-0 kubenswrapper[29458]: E0308 22:13:53.701660 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-metrics-client-ca podName:ecb3134a-ff4f-4069-8817-010b400296f6 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.201649175 +0000 UTC m=+3.489706887 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-metrics-client-ca") pod "telemeter-client-7d9bcd6578-pxdzg" (UID: "ecb3134a-ff4f-4069-8817-010b400296f6") : failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.701747 master-0 kubenswrapper[29458]: E0308 22:13:53.701701 29458 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.701794 master-0 kubenswrapper[29458]: E0308 22:13:53.701773 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-machine-approver-tls podName:4cbc6c17-7c16-435f-9399-b6f1ddb6d17f nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.201763358 +0000 UTC m=+3.489821070 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-machine-approver-tls") pod "machine-approver-754bdc9f9d-stxvg" (UID: "4cbc6c17-7c16-435f-9399-b6f1ddb6d17f") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.701900 master-0 kubenswrapper[29458]: E0308 22:13:53.701855 29458 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.701951 master-0 kubenswrapper[29458]: E0308 22:13:53.701924 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3e38e989-41b8-4c80-99fb-8d414dda5da1-images podName:3e38e989-41b8-4c80-99fb-8d414dda5da1 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.201913922 +0000 UTC m=+3.489971634 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/3e38e989-41b8-4c80-99fb-8d414dda5da1-images") pod "machine-config-operator-fdb5c78b5-m7phf" (UID: "3e38e989-41b8-4c80-99fb-8d414dda5da1") : failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.702743 master-0 kubenswrapper[29458]: E0308 22:13:53.702723 29458 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.702903 master-0 kubenswrapper[29458]: E0308 22:13:53.702889 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3-mcd-auth-proxy-config podName:7868a4fb-af89-4bdc-b41b-31f4ee59b5f3 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.202874899 +0000 UTC m=+3.490932571 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3-mcd-auth-proxy-config") pod "machine-config-daemon-q669r" (UID: "7868a4fb-af89-4bdc-b41b-31f4ee59b5f3") : failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.703063 master-0 kubenswrapper[29458]: E0308 22:13:53.703025 29458 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.703143 master-0 kubenswrapper[29458]: E0308 22:13:53.703099 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0269ed52-a753-49aa-9c38-c7aee23cebbd-node-exporter-kube-rbac-proxy-config podName:0269ed52-a753-49aa-9c38-c7aee23cebbd nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.203088186 +0000 UTC m=+3.491145868 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/0269ed52-a753-49aa-9c38-c7aee23cebbd-node-exporter-kube-rbac-proxy-config") pod "node-exporter-l8k5g" (UID: "0269ed52-a753-49aa-9c38-c7aee23cebbd") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.703143 master-0 kubenswrapper[29458]: E0308 22:13:53.703025 29458 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.703232 master-0 kubenswrapper[29458]: E0308 22:13:53.703145 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c377685c-2024-4ef7-932d-5858eeb0d9bd-openshift-state-metrics-kube-rbac-proxy-config podName:c377685c-2024-4ef7-932d-5858eeb0d9bd nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.203137298 +0000 UTC m=+3.491195000 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/c377685c-2024-4ef7-932d-5858eeb0d9bd-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-74cc79fd76-8rbn8" (UID: "c377685c-2024-4ef7-932d-5858eeb0d9bd") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.704848 master-0 kubenswrapper[29458]: E0308 22:13:53.704801 29458 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.704933 master-0 kubenswrapper[29458]: E0308 22:13:53.704833 29458 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.704977 master-0 kubenswrapper[29458]: E0308 22:13:53.704857 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-serving-certs-ca-bundle podName:ecb3134a-ff4f-4069-8817-010b400296f6 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.204846717 +0000 UTC m=+3.492904419 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-serving-certs-ca-bundle") pod "telemeter-client-7d9bcd6578-pxdzg" (UID: "ecb3134a-ff4f-4069-8817-010b400296f6") : failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.704977 master-0 kubenswrapper[29458]: E0308 22:13:53.704962 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c377685c-2024-4ef7-932d-5858eeb0d9bd-openshift-state-metrics-tls podName:c377685c-2024-4ef7-932d-5858eeb0d9bd nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.204946419 +0000 UTC m=+3.493004011 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/c377685c-2024-4ef7-932d-5858eeb0d9bd-openshift-state-metrics-tls") pod "openshift-state-metrics-74cc79fd76-8rbn8" (UID: "c377685c-2024-4ef7-932d-5858eeb0d9bd") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.705061 master-0 kubenswrapper[29458]: E0308 22:13:53.704999 29458 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.705061 master-0 kubenswrapper[29458]: E0308 22:13:53.705031 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8a7e92d4-b7ed-408b-b7cf-00209a627bea-metrics-client-ca podName:8a7e92d4-b7ed-408b-b7cf-00209a627bea nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.205023122 +0000 UTC m=+3.493080834 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/8a7e92d4-b7ed-408b-b7cf-00209a627bea-metrics-client-ca") pod "prometheus-operator-5ff8674d55-jd2m9" (UID: "8a7e92d4-b7ed-408b-b7cf-00209a627bea") : failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.705061 master-0 kubenswrapper[29458]: E0308 22:13:53.705057 29458 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.705203 master-0 kubenswrapper[29458]: E0308 22:13:53.705106 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-federate-client-tls podName:ecb3134a-ff4f-4069-8817-010b400296f6 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.205098654 +0000 UTC m=+3.493156366 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-federate-client-tls") pod "telemeter-client-7d9bcd6578-pxdzg" (UID: "ecb3134a-ff4f-4069-8817-010b400296f6") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.705203 master-0 kubenswrapper[29458]: E0308 22:13:53.705128 29458 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.705203 master-0 kubenswrapper[29458]: E0308 22:13:53.705157 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0269ed52-a753-49aa-9c38-c7aee23cebbd-node-exporter-tls podName:0269ed52-a753-49aa-9c38-c7aee23cebbd nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.205148625 +0000 UTC m=+3.493206347 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/0269ed52-a753-49aa-9c38-c7aee23cebbd-node-exporter-tls") pod "node-exporter-l8k5g" (UID: "0269ed52-a753-49aa-9c38-c7aee23cebbd") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.705203 master-0 kubenswrapper[29458]: E0308 22:13:53.705179 29458 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.705203 master-0 kubenswrapper[29458]: E0308 22:13:53.705207 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3-proxy-tls podName:7868a4fb-af89-4bdc-b41b-31f4ee59b5f3 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.205199647 +0000 UTC m=+3.493257249 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3-proxy-tls") pod "machine-config-daemon-q669r" (UID: "7868a4fb-af89-4bdc-b41b-31f4ee59b5f3") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.705390 master-0 kubenswrapper[29458]: E0308 22:13:53.705212 29458 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.705390 master-0 kubenswrapper[29458]: E0308 22:13:53.705243 29458 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.705390 master-0 kubenswrapper[29458]: E0308 22:13:53.705275 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1ef14467-bb62-462d-9dec-dee43e4cc9bd-machine-api-operator-tls podName:1ef14467-bb62-462d-9dec-dee43e4cc9bd nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.205249778 +0000 UTC m=+3.493307450 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/1ef14467-bb62-462d-9dec-dee43e4cc9bd-machine-api-operator-tls") pod "machine-api-operator-84bf6db4f9-64gfj" (UID: "1ef14467-bb62-462d-9dec-dee43e4cc9bd") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.705390 master-0 kubenswrapper[29458]: E0308 22:13:53.705302 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b6bc6f78-2c5c-4add-925f-f6568a49c2cc-mcc-auth-proxy-config podName:b6bc6f78-2c5c-4add-925f-f6568a49c2cc nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.205291949 +0000 UTC m=+3.493349661 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "mcc-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/b6bc6f78-2c5c-4add-925f-f6568a49c2cc-mcc-auth-proxy-config") pod "machine-config-controller-ff46b7bdf-zn77m" (UID: "b6bc6f78-2c5c-4add-925f-f6568a49c2cc") : failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.706447 master-0 kubenswrapper[29458]: E0308 22:13:53.706407 29458 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.706447 master-0 kubenswrapper[29458]: E0308 22:13:53.706433 29458 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.706554 master-0 kubenswrapper[29458]: E0308 22:13:53.706462 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d063b330-4180-43de-a248-c573183d96f1-auth-proxy-config podName:d063b330-4180-43de-a248-c573183d96f1 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.206449853 +0000 UTC m=+3.494507445 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/d063b330-4180-43de-a248-c573183d96f1-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" (UID: "d063b330-4180-43de-a248-c573183d96f1") : failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.706554 master-0 kubenswrapper[29458]: E0308 22:13:53.706489 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b5246dc-b715-4678-a3a9-878df57dd236-node-bootstrap-token podName:4b5246dc-b715-4678-a3a9-878df57dd236 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.206478064 +0000 UTC m=+3.494535656 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/4b5246dc-b715-4678-a3a9-878df57dd236-node-bootstrap-token") pod "machine-config-server-svxwz" (UID: "4b5246dc-b715-4678-a3a9-878df57dd236") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.706554 master-0 kubenswrapper[29458]: E0308 22:13:53.706488 29458 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.706554 master-0 kubenswrapper[29458]: E0308 22:13:53.706521 29458 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.706554 master-0 kubenswrapper[29458]: E0308 22:13:53.706494 29458 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.706750 master-0 kubenswrapper[29458]: E0308 22:13:53.706551 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0269ed52-a753-49aa-9c38-c7aee23cebbd-metrics-client-ca podName:0269ed52-a753-49aa-9c38-c7aee23cebbd nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.206537285 +0000 UTC m=+3.494594987 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/0269ed52-a753-49aa-9c38-c7aee23cebbd-metrics-client-ca") pod "node-exporter-l8k5g" (UID: "0269ed52-a753-49aa-9c38-c7aee23cebbd") : failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.706750 master-0 kubenswrapper[29458]: E0308 22:13:53.706604 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-telemeter-client-tls podName:ecb3134a-ff4f-4069-8817-010b400296f6 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.206594967 +0000 UTC m=+3.494652669 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-telemeter-client-tls") pod "telemeter-client-7d9bcd6578-pxdzg" (UID: "ecb3134a-ff4f-4069-8817-010b400296f6") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.706750 master-0 kubenswrapper[29458]: E0308 22:13:53.706617 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-secret-metrics-server-tls podName:d589bfbb-3a7d-4617-9770-5c9ef737cb4a nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.206611447 +0000 UTC m=+3.494669159 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-secret-metrics-server-tls") pod "metrics-server-f5876b8d7-2222x" (UID: "d589bfbb-3a7d-4617-9770-5c9ef737cb4a") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.707382 master-0 kubenswrapper[29458]: E0308 22:13:53.707357 29458 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.707442 master-0 kubenswrapper[29458]: E0308 22:13:53.707393 29458 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.707442 master-0 kubenswrapper[29458]: E0308 22:13:53.707406 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-telemeter-trusted-ca-bundle podName:ecb3134a-ff4f-4069-8817-010b400296f6 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.2073951 +0000 UTC m=+3.495452802 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-telemeter-trusted-ca-bundle") pod "telemeter-client-7d9bcd6578-pxdzg" (UID: "ecb3134a-ff4f-4069-8817-010b400296f6") : failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.707530 master-0 kubenswrapper[29458]: E0308 22:13:53.707445 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3e38e989-41b8-4c80-99fb-8d414dda5da1-auth-proxy-config podName:3e38e989-41b8-4c80-99fb-8d414dda5da1 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.207424931 +0000 UTC m=+3.495482523 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/3e38e989-41b8-4c80-99fb-8d414dda5da1-auth-proxy-config") pod "machine-config-operator-fdb5c78b5-m7phf" (UID: "3e38e989-41b8-4c80-99fb-8d414dda5da1") : failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.707578 master-0 kubenswrapper[29458]: E0308 22:13:53.707537 29458 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.707622 master-0 kubenswrapper[29458]: E0308 22:13:53.707581 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e-webhook-cert podName:4e2eb05c-eaa5-4d9b-abad-c0ef6835087e nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.207573575 +0000 UTC m=+3.495631167 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e-webhook-cert") pod "packageserver-f988cd549-68kmh" (UID: "4e2eb05c-eaa5-4d9b-abad-c0ef6835087e") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.707622 master-0 kubenswrapper[29458]: E0308 22:13:53.707604 29458 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.707698 master-0 kubenswrapper[29458]: E0308 22:13:53.707629 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a7e92d4-b7ed-408b-b7cf-00209a627bea-prometheus-operator-kube-rbac-proxy-config podName:8a7e92d4-b7ed-408b-b7cf-00209a627bea nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.207621767 +0000 UTC m=+3.495679359 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/8a7e92d4-b7ed-408b-b7cf-00209a627bea-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-5ff8674d55-jd2m9" (UID: "8a7e92d4-b7ed-408b-b7cf-00209a627bea") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.708022 master-0 kubenswrapper[29458]: E0308 22:13:53.707988 29458 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.708106 master-0 kubenswrapper[29458]: E0308 22:13:53.708054 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-secret-telemeter-client-kube-rbac-proxy-config podName:ecb3134a-ff4f-4069-8817-010b400296f6 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.208042628 +0000 UTC m=+3.496100310 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-7d9bcd6578-pxdzg" (UID: "ecb3134a-ff4f-4069-8817-010b400296f6") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.708275 master-0 kubenswrapper[29458]: E0308 22:13:53.708251 29458 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.708376 master-0 kubenswrapper[29458]: E0308 22:13:53.708352 29458 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.708423 master-0 kubenswrapper[29458]: E0308 22:13:53.708275 29458 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.708462 master-0 kubenswrapper[29458]: E0308 22:13:53.708294 29458 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.708462 master-0 kubenswrapper[29458]: E0308 22:13:53.708303 29458 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.708539 master-0 kubenswrapper[29458]: E0308 22:13:53.708315 29458 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.708539 master-0 kubenswrapper[29458]: E0308 22:13:53.708499 29458 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-dv1om8r64ct8c: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.708539 master-0 kubenswrapper[29458]: E0308 22:13:53.708516 29458 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.708539 master-0 kubenswrapper[29458]: E0308 22:13:53.708534 29458 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.708539 master-0 kubenswrapper[29458]: E0308 22:13:53.708517 29458 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.708772 master-0 kubenswrapper[29458]: E0308 22:13:53.708755 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-kube-rbac-proxy-config podName:c3af41e9-c604-48da-bec5-df81c2ef3374 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.208729669 +0000 UTC m=+3.496787311 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-68b88f8cb5-wznvc" (UID: "c3af41e9-c604-48da-bec5-df81c2ef3374") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.708874 master-0 kubenswrapper[29458]: E0308 22:13:53.708846 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-tls podName:c3af41e9-c604-48da-bec5-df81c2ef3374 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.208834052 +0000 UTC m=+3.496891644 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-tls") pod "kube-state-metrics-68b88f8cb5-wznvc" (UID: "c3af41e9-c604-48da-bec5-df81c2ef3374") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.708874 master-0 kubenswrapper[29458]: E0308 22:13:53.708870 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d063b330-4180-43de-a248-c573183d96f1-images podName:d063b330-4180-43de-a248-c573183d96f1 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.208864032 +0000 UTC m=+3.496921624 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/d063b330-4180-43de-a248-c573183d96f1-images") pod "cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" (UID: "d063b330-4180-43de-a248-c573183d96f1") : failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.708973 master-0 kubenswrapper[29458]: E0308 22:13:53.708883 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-metrics-server-audit-profiles podName:d589bfbb-3a7d-4617-9770-5c9ef737cb4a nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.208876833 +0000 UTC m=+3.496934425 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-metrics-server-audit-profiles") pod "metrics-server-f5876b8d7-2222x" (UID: "d589bfbb-3a7d-4617-9770-5c9ef737cb4a") : failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.708973 master-0 kubenswrapper[29458]: E0308 22:13:53.708902 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-custom-resource-state-configmap podName:c3af41e9-c604-48da-bec5-df81c2ef3374 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.208890143 +0000 UTC m=+3.496947735 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-68b88f8cb5-wznvc" (UID: "c3af41e9-c604-48da-bec5-df81c2ef3374") : failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.708973 master-0 kubenswrapper[29458]: E0308 22:13:53.708921 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b5246dc-b715-4678-a3a9-878df57dd236-certs podName:4b5246dc-b715-4678-a3a9-878df57dd236 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.208915214 +0000 UTC m=+3.496972806 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/4b5246dc-b715-4678-a3a9-878df57dd236-certs") pod "machine-config-server-svxwz" (UID: "4b5246dc-b715-4678-a3a9-878df57dd236") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.708973 master-0 kubenswrapper[29458]: E0308 22:13:53.708939 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e-apiservice-cert podName:4e2eb05c-eaa5-4d9b-abad-c0ef6835087e nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.208932944 +0000 UTC m=+3.496990536 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e-apiservice-cert") pod "packageserver-f988cd549-68kmh" (UID: "4e2eb05c-eaa5-4d9b-abad-c0ef6835087e") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.708973 master-0 kubenswrapper[29458]: E0308 22:13:53.708954 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-client-ca-bundle podName:d589bfbb-3a7d-4617-9770-5c9ef737cb4a nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.208950285 +0000 UTC m=+3.497007867 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-client-ca-bundle") pod "metrics-server-f5876b8d7-2222x" (UID: "d589bfbb-3a7d-4617-9770-5c9ef737cb4a") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.708973 master-0 kubenswrapper[29458]: E0308 22:13:53.708966 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a7e92d4-b7ed-408b-b7cf-00209a627bea-prometheus-operator-tls podName:8a7e92d4-b7ed-408b-b7cf-00209a627bea nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.208960305 +0000 UTC m=+3.497017897 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/8a7e92d4-b7ed-408b-b7cf-00209a627bea-prometheus-operator-tls") pod "prometheus-operator-5ff8674d55-jd2m9" (UID: "8a7e92d4-b7ed-408b-b7cf-00209a627bea") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.709236 master-0 kubenswrapper[29458]: E0308 22:13:53.708978 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d063b330-4180-43de-a248-c573183d96f1-cloud-controller-manager-operator-tls podName:d063b330-4180-43de-a248-c573183d96f1 nodeName:}" failed. 
No retries permitted until 2026-03-08 22:13:54.208971875 +0000 UTC m=+3.497029467 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/d063b330-4180-43de-a248-c573183d96f1-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" (UID: "d063b330-4180-43de-a248-c573183d96f1") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.709771 master-0 kubenswrapper[29458]: E0308 22:13:53.709731 29458 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.709829 master-0 kubenswrapper[29458]: E0308 22:13:53.709786 29458 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.709829 master-0 kubenswrapper[29458]: E0308 22:13:53.709805 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-config podName:4cbc6c17-7c16-435f-9399-b6f1ddb6d17f nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.209792328 +0000 UTC m=+3.497850020 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-config") pod "machine-approver-754bdc9f9d-stxvg" (UID: "4cbc6c17-7c16-435f-9399-b6f1ddb6d17f") : failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.709829 master-0 kubenswrapper[29458]: E0308 22:13:53.709814 29458 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.709829 master-0 kubenswrapper[29458]: E0308 22:13:53.709825 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-configmap-kubelet-serving-ca-bundle podName:d589bfbb-3a7d-4617-9770-5c9ef737cb4a nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.209815709 +0000 UTC m=+3.497873301 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-configmap-kubelet-serving-ca-bundle") pod "metrics-server-f5876b8d7-2222x" (UID: "d589bfbb-3a7d-4617-9770-5c9ef737cb4a") : failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.709973 master-0 kubenswrapper[29458]: E0308 22:13:53.709838 29458 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.709973 master-0 kubenswrapper[29458]: E0308 22:13:53.709852 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-auth-proxy-config podName:4cbc6c17-7c16-435f-9399-b6f1ddb6d17f nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.20984306 +0000 UTC m=+3.497900652 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-auth-proxy-config") pod "machine-approver-754bdc9f9d-stxvg" (UID: "4cbc6c17-7c16-435f-9399-b6f1ddb6d17f") : failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.709973 master-0 kubenswrapper[29458]: E0308 22:13:53.709877 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-secret-metrics-client-certs podName:d589bfbb-3a7d-4617-9770-5c9ef737cb4a nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.209868261 +0000 UTC m=+3.497925993 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-secret-metrics-client-certs") pod "metrics-server-f5876b8d7-2222x" (UID: "d589bfbb-3a7d-4617-9770-5c9ef737cb4a") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.709973 master-0 kubenswrapper[29458]: E0308 22:13:53.709884 29458 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.709973 master-0 kubenswrapper[29458]: E0308 22:13:53.709908 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c377685c-2024-4ef7-932d-5858eeb0d9bd-metrics-client-ca podName:c377685c-2024-4ef7-932d-5858eeb0d9bd nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.209901822 +0000 UTC m=+3.497959414 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/c377685c-2024-4ef7-932d-5858eeb0d9bd-metrics-client-ca") pod "openshift-state-metrics-74cc79fd76-8rbn8" (UID: "c377685c-2024-4ef7-932d-5858eeb0d9bd") : failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:53.709973 master-0 kubenswrapper[29458]: E0308 22:13:53.709910 29458 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:53.709973 master-0 kubenswrapper[29458]: E0308 22:13:53.709942 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-secret-telemeter-client podName:ecb3134a-ff4f-4069-8817-010b400296f6 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:54.209934743 +0000 UTC m=+3.497992465 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-secret-telemeter-client") pod "telemeter-client-7d9bcd6578-pxdzg" (UID: "ecb3134a-ff4f-4069-8817-010b400296f6") : failed to sync secret cache: timed out waiting for the condition
Mar 08 22:13:53.720134 master-0 kubenswrapper[29458]: I0308 22:13:53.720050 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 08 22:13:53.749590 master-0 kubenswrapper[29458]: I0308 22:13:53.749254 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 08 22:13:53.760011 master-0 kubenswrapper[29458]: I0308 22:13:53.759968 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 08 22:13:53.783816 master-0 kubenswrapper[29458]: I0308 22:13:53.783771 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 08 22:13:53.799847 master-0 kubenswrapper[29458]: I0308 22:13:53.799784 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 08 22:13:53.819614 master-0 kubenswrapper[29458]: I0308 22:13:53.819560 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 08 22:13:53.840721 master-0 kubenswrapper[29458]: I0308 22:13:53.840675 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 08 22:13:53.861821 master-0 kubenswrapper[29458]: I0308 22:13:53.860512 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 08 22:13:53.881721 master-0 kubenswrapper[29458]: I0308 22:13:53.881655 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-tqhmq"
Mar 08 22:13:53.901041 master-0 kubenswrapper[29458]: I0308 22:13:53.900987 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 08 22:13:53.921832 master-0 kubenswrapper[29458]: I0308 22:13:53.921509 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 08 22:13:53.941259 master-0 kubenswrapper[29458]: I0308 22:13:53.941012 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Mar 08 22:13:53.963102 master-0 kubenswrapper[29458]: I0308 22:13:53.961782 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-7kdzp"
Mar 08 22:13:53.981110 master-0 kubenswrapper[29458]: I0308 22:13:53.980588 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Mar 08 22:13:54.006940 master-0 kubenswrapper[29458]: I0308 22:13:54.006789 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Mar 08 22:13:54.022685 master-0 kubenswrapper[29458]: I0308 22:13:54.022659 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Mar 08 22:13:54.042200 master-0 kubenswrapper[29458]: I0308 22:13:54.042160 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-hlmng"
Mar 08 22:13:54.064456 master-0 kubenswrapper[29458]: I0308 22:13:54.064407 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-b4pnr"
Mar 08 22:13:54.080568 master-0 kubenswrapper[29458]: I0308 22:13:54.080507 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Mar 08 22:13:54.106134 master-0 kubenswrapper[29458]: I0308 22:13:54.105636 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 08 22:13:54.120898 master-0 kubenswrapper[29458]: I0308 22:13:54.120851 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-lvhnl"
Mar 08 22:13:54.140562 master-0 kubenswrapper[29458]: I0308 22:13:54.140510 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Mar 08 22:13:54.149617 master-0 kubenswrapper[29458]: I0308 22:13:54.149570 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-client-ca\") pod \"controller-manager-f7df5f5b-txsrq\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") " pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq"
Mar 08 22:13:54.150009 master-0 kubenswrapper[29458]: I0308 22:13:54.149960 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-client-ca\") pod \"controller-manager-f7df5f5b-txsrq\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") " pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq"
Mar 08 22:13:54.150140 master-0 kubenswrapper[29458]: I0308 22:13:54.150107 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-p68k6\" (UID: \"c228b17c-fd7b-4273-ac03-eac5d4a3a4ad\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6"
Mar 08 22:13:54.150321 master-0 kubenswrapper[29458]: I0308 22:13:54.150289 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d9fe466f-5a23-4f69-8a96-44bd5d6194f5-cert\") pod \"cluster-autoscaler-operator-69576476f7-dvgxg\" (UID: \"d9fe466f-5a23-4f69-8a96-44bd5d6194f5\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg"
Mar 08 22:13:54.150392 master-0 kubenswrapper[29458]: I0308 22:13:54.150337 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66e50eed-e3ac-431f-931b-7c4c848c491b-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-fn4ck\" (UID: \"66e50eed-e3ac-431f-931b-7c4c848c491b\") " pod="openshift-insights/insights-operator-8f89dfddd-fn4ck"
Mar 08 22:13:54.150392 master-0 kubenswrapper[29458]: I0308 22:13:54.150387 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d9e9c931-9595-42f1-bbc2-c412286f6cd1-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-xwmmm\" (UID: \"d9e9c931-9595-42f1-bbc2-c412286f6cd1\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm"
Mar 08 22:13:54.150472 master-0 kubenswrapper[29458]: I0308 22:13:54.150452 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d9e9c931-9595-42f1-bbc2-c412286f6cd1-images\") pod \"cluster-baremetal-operator-5cdb4c5598-xwmmm\" (UID: \"d9e9c931-9595-42f1-bbc2-c412286f6cd1\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm"
Mar 08 22:13:54.150515 master-0 kubenswrapper[29458]: I0308 22:13:54.150503 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fd9abe2b-f829-4376-9abe-7da0a08770e7-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-mkvtk\" (UID: \"fd9abe2b-f829-4376-9abe-7da0a08770e7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mkvtk"
Mar 08 22:13:54.150560 master-0 kubenswrapper[29458]: I0308 22:13:54.150524 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-proxy-ca-bundles\") pod \"controller-manager-f7df5f5b-txsrq\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") " pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq"
Mar 08 22:13:54.150560 master-0 kubenswrapper[29458]: I0308 22:13:54.150548 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66e50eed-e3ac-431f-931b-7c4c848c491b-serving-cert\") pod \"insights-operator-8f89dfddd-fn4ck\" (UID: \"66e50eed-e3ac-431f-931b-7c4c848c491b\") " pod="openshift-insights/insights-operator-8f89dfddd-fn4ck"
Mar 08 22:13:54.150635 master-0 kubenswrapper[29458]: I0308 22:13:54.150614 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-config\") pod \"controller-manager-f7df5f5b-txsrq\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") " pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq"
Mar 08 22:13:54.150725 master-0 kubenswrapper[29458]: I0308 22:13:54.150692 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/d9e9c931-9595-42f1-bbc2-c412286f6cd1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-xwmmm\" (UID: \"d9e9c931-9595-42f1-bbc2-c412286f6cd1\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm"
Mar 08 22:13:54.150768 master-0 kubenswrapper[29458]: I0308 22:13:54.150751 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da51940a-a38f-4baf-9c14-b2f1f46b5aed-config\") pod \"route-controller-manager-86888d445f-7f74k\" (UID: \"da51940a-a38f-4baf-9c14-b2f1f46b5aed\") " pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k"
Mar 08 22:13:54.150811 master-0 kubenswrapper[29458]: I0308 22:13:54.150786 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da51940a-a38f-4baf-9c14-b2f1f46b5aed-serving-cert\") pod \"route-controller-manager-86888d445f-7f74k\" (UID: \"da51940a-a38f-4baf-9c14-b2f1f46b5aed\") " pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k"
Mar 08 22:13:54.150929 master-0 kubenswrapper[29458]: I0308 22:13:54.150901 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9e9c931-9595-42f1-bbc2-c412286f6cd1-config\") pod \"cluster-baremetal-operator-5cdb4c5598-xwmmm\" (UID: \"d9e9c931-9595-42f1-bbc2-c412286f6cd1\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm"
Mar 08 22:13:54.150986 master-0 kubenswrapper[29458]: I0308 22:13:54.150957 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6eb502a1-db10-46ba-b698-461919464fb9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-c246n\" (UID: \"6eb502a1-db10-46ba-b698-461919464fb9\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-c246n"
Mar 08 22:13:54.151149 master-0 kubenswrapper[29458]: I0308 22:13:54.151129 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d9fe466f-5a23-4f69-8a96-44bd5d6194f5-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-dvgxg\" (UID: \"d9fe466f-5a23-4f69-8a96-44bd5d6194f5\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg"
Mar 08 22:13:54.151435 master-0 kubenswrapper[29458]: I0308 22:13:54.151411 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6eb502a1-db10-46ba-b698-461919464fb9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-c246n\" (UID: \"6eb502a1-db10-46ba-b698-461919464fb9\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-c246n"
Mar 08 22:13:54.151724 master-0 kubenswrapper[29458]: I0308 22:13:54.151546 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da51940a-a38f-4baf-9c14-b2f1f46b5aed-serving-cert\") pod \"route-controller-manager-86888d445f-7f74k\" (UID: \"da51940a-a38f-4baf-9c14-b2f1f46b5aed\") " pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k"
Mar 08 22:13:54.151724 master-0 kubenswrapper[29458]: I0308 22:13:54.151601 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fd9abe2b-f829-4376-9abe-7da0a08770e7-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-mkvtk\" (UID: \"fd9abe2b-f829-4376-9abe-7da0a08770e7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mkvtk"
Mar 08 22:13:54.151724 master-0 kubenswrapper[29458]: I0308 22:13:54.151618 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d9e9c931-9595-42f1-bbc2-c412286f6cd1-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-xwmmm\" (UID: \"d9e9c931-9595-42f1-bbc2-c412286f6cd1\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm"
Mar 08 22:13:54.151724 master-0 kubenswrapper[29458]: I0308 22:13:54.151632 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da51940a-a38f-4baf-9c14-b2f1f46b5aed-config\") pod \"route-controller-manager-86888d445f-7f74k\" (UID: \"da51940a-a38f-4baf-9c14-b2f1f46b5aed\") " pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k"
Mar 08 22:13:54.151961 master-0 kubenswrapper[29458]: I0308 22:13:54.151931 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-config\") pod \"controller-manager-f7df5f5b-txsrq\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") " pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq"
Mar 08 22:13:54.152009 master-0 kubenswrapper[29458]: I0308 22:13:54.151975 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-mfqlz\" (UID: \"2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz"
Mar 08 22:13:54.152009 master-0 kubenswrapper[29458]: I0308 22:13:54.152001 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-mfqlz\" (UID: \"2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz"
Mar 08 22:13:54.152133 master-0 kubenswrapper[29458]: I0308 22:13:54.152015 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-proxy-ca-bundles\") pod \"controller-manager-f7df5f5b-txsrq\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") " pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq"
Mar 08 22:13:54.152174 master-0 kubenswrapper[29458]: I0308 22:13:54.152153 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2395900a-ff6b-46ff-92c6-a8a1b5675b67-serving-cert\") pod \"controller-manager-f7df5f5b-txsrq\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") " pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq"
Mar 08 22:13:54.152212 master-0 kubenswrapper[29458]: I0308 22:13:54.152193 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d9fe466f-5a23-4f69-8a96-44bd5d6194f5-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-dvgxg\" (UID: \"d9fe466f-5a23-4f69-8a96-44bd5d6194f5\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg"
Mar 08 22:13:54.152306 master-0 kubenswrapper[29458]: I0308 22:13:54.152281 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66e50eed-e3ac-431f-931b-7c4c848c491b-service-ca-bundle\") pod \"insights-operator-8f89dfddd-fn4ck\" (UID: \"66e50eed-e3ac-431f-931b-7c4c848c491b\") " pod="openshift-insights/insights-operator-8f89dfddd-fn4ck"
Mar 08 22:13:54.152370 master-0 kubenswrapper[29458]: I0308 22:13:54.152313 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-mfqlz\" (UID: \"2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz"
Mar 08 22:13:54.152370 master-0 kubenswrapper[29458]: I0308 22:13:54.152359 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-mfqlz\" (UID: \"2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz"
Mar 08 22:13:54.152448 master-0 kubenswrapper[29458]: I0308 22:13:54.152378 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/d9e9c931-9595-42f1-bbc2-c412286f6cd1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-xwmmm\" (UID: \"d9e9c931-9595-42f1-bbc2-c412286f6cd1\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm"
Mar 08 22:13:54.152525 master-0 kubenswrapper[29458]: I0308 22:13:54.152477 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2395900a-ff6b-46ff-92c6-a8a1b5675b67-serving-cert\") pod \"controller-manager-f7df5f5b-txsrq\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") " pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq"
Mar 08 22:13:54.160500 master-0 kubenswrapper[29458]: I0308 22:13:54.160456 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-g8h2t"
Mar 08 22:13:54.180539 master-0 kubenswrapper[29458]: I0308 22:13:54.180481 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Mar 08 22:13:54.183504 master-0 kubenswrapper[29458]: I0308 22:13:54.183469 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d9fe466f-5a23-4f69-8a96-44bd5d6194f5-cert\") pod \"cluster-autoscaler-operator-69576476f7-dvgxg\" (UID: \"d9fe466f-5a23-4f69-8a96-44bd5d6194f5\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg"
Mar 08 22:13:54.201582 master-0 kubenswrapper[29458]: I0308 22:13:54.201327 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 08 22:13:54.203403 master-0 kubenswrapper[29458]: I0308 22:13:54.203352 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66e50eed-e3ac-431f-931b-7c4c848c491b-service-ca-bundle\") pod \"insights-operator-8f89dfddd-fn4ck\" (UID: \"66e50eed-e3ac-431f-931b-7c4c848c491b\") " pod="openshift-insights/insights-operator-8f89dfddd-fn4ck"
Mar 08 22:13:54.220936 master-0 kubenswrapper[29458]: I0308 22:13:54.220867 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Mar 08 22:13:54.250349 master-0 kubenswrapper[29458]: I0308 22:13:54.248402 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Mar 08 22:13:54.253307 master-0 kubenswrapper[29458]: I0308 22:13:54.253261 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66e50eed-e3ac-431f-931b-7c4c848c491b-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-fn4ck\" (UID: \"66e50eed-e3ac-431f-931b-7c4c848c491b\") " pod="openshift-insights/insights-operator-8f89dfddd-fn4ck"
Mar 08 22:13:54.253712 master-0 kubenswrapper[29458]: I0308 22:13:54.253656 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3e38e989-41b8-4c80-99fb-8d414dda5da1-images\") pod \"machine-config-operator-fdb5c78b5-m7phf\" (UID: \"3e38e989-41b8-4c80-99fb-8d414dda5da1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf"
Mar 08 22:13:54.253783 master-0 kubenswrapper[29458]: I0308 22:13:54.253757 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c377685c-2024-4ef7-932d-5858eeb0d9bd-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-8rbn8\" (UID: \"c377685c-2024-4ef7-932d-5858eeb0d9bd\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8"
Mar 08 22:13:54.253938 master-0 kubenswrapper[29458]: I0308 22:13:54.253917 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0269ed52-a753-49aa-9c38-c7aee23cebbd-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g"
Mar 08 22:13:54.254009 master-0 kubenswrapper[29458]: I0308 22:13:54.253956 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8a7e92d4-b7ed-408b-b7cf-00209a627bea-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-jd2m9\" (UID: \"8a7e92d4-b7ed-408b-b7cf-00209a627bea\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9"
Mar 08 22:13:54.254052 master-0 kubenswrapper[29458]: I0308 22:13:54.254023 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3-mcd-auth-proxy-config\") pod \"machine-config-daemon-q669r\" (UID: \"7868a4fb-af89-4bdc-b41b-31f4ee59b5f3\") " pod="openshift-machine-config-operator/machine-config-daemon-q669r"
Mar 08 22:13:54.254143 master-0 kubenswrapper[29458]: I0308 22:13:54.254120 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c377685c-2024-4ef7-932d-5858eeb0d9bd-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-8rbn8\" (UID: \"c377685c-2024-4ef7-932d-5858eeb0d9bd\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8"
Mar 08 22:13:54.254315 master-0 kubenswrapper[29458]: I0308 22:13:54.254292 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-federate-client-tls\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg"
Mar 08 22:13:54.254451 master-0 kubenswrapper[29458]: I0308 22:13:54.254430 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-serving-certs-ca-bundle\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg"
Mar 08 22:13:54.254600 master-0 kubenswrapper[29458]: I0308 22:13:54.254581 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0269ed52-a753-49aa-9c38-c7aee23cebbd-node-exporter-tls\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g"
Mar 08 22:13:54.254741 master-0 kubenswrapper[29458]: I0308 22:13:54.254723 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3-proxy-tls\") pod \"machine-config-daemon-q669r\" (UID: \"7868a4fb-af89-4bdc-b41b-31f4ee59b5f3\") " pod="openshift-machine-config-operator/machine-config-daemon-q669r"
Mar 08 22:13:54.254869 master-0 kubenswrapper[29458]: I0308 22:13:54.254851 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1ef14467-bb62-462d-9dec-dee43e4cc9bd-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-64gfj\" (UID: \"1ef14467-bb62-462d-9dec-dee43e4cc9bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj"
Mar 08 22:13:54.254980 master-0 kubenswrapper[29458]: I0308 22:13:54.254963 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b6bc6f78-2c5c-4add-925f-f6568a49c2cc-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-zn77m\" (UID: \"b6bc6f78-2c5c-4add-925f-f6568a49c2cc\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m"
Mar 08 22:13:54.255114 master-0 kubenswrapper[29458]: I0308 22:13:54.255096 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-secret-metrics-server-tls\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x"
Mar 08 22:13:54.255268 master-0 kubenswrapper[29458]: I0308 22:13:54.255251 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4b5246dc-b715-4678-a3a9-878df57dd236-node-bootstrap-token\") pod \"machine-config-server-svxwz\" (UID: \"4b5246dc-b715-4678-a3a9-878df57dd236\") " pod="openshift-machine-config-operator/machine-config-server-svxwz"
Mar 08 22:13:54.255399 master-0 kubenswrapper[29458]: I0308 22:13:54.255380 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d063b330-4180-43de-a248-c573183d96f1-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh\" (UID: \"d063b330-4180-43de-a248-c573183d96f1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh"
Mar 08 22:13:54.255504 master-0 kubenswrapper[29458]: I0308 22:13:54.255486 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-telemeter-client-tls\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg"
Mar 08 22:13:54.255605 master-0 kubenswrapper[29458]: I0308 22:13:54.255586 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0269ed52-a753-49aa-9c38-c7aee23cebbd-metrics-client-ca\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g"
Mar 08 22:13:54.255773 master-0 kubenswrapper[29458]: I0308 22:13:54.255757 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-telemeter-trusted-ca-bundle\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg"
Mar 08 22:13:54.255892 master-0 kubenswrapper[29458]: I0308 22:13:54.255875 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3e38e989-41b8-4c80-99fb-8d414dda5da1-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-m7phf\" (UID: \"3e38e989-41b8-4c80-99fb-8d414dda5da1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf"
Mar 08 22:13:54.256008 master-0 kubenswrapper[29458]: I0308 22:13:54.255991 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8a7e92d4-b7ed-408b-b7cf-00209a627bea-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-jd2m9\" (UID: \"8a7e92d4-b7ed-408b-b7cf-00209a627bea\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9"
Mar 08 22:13:54.256242 master-0 kubenswrapper[29458]: I0308 22:13:54.256218 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-metrics-server-audit-profiles\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x"
Mar 08 22:13:54.256338 master-0 kubenswrapper[29458]: I0308 22:13:54.256267 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc"
Mar 08 22:13:54.256338 master-0 kubenswrapper[29458]: I0308 22:13:54.256302 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg"
Mar 08 22:13:54.256433 master-0 kubenswrapper[29458]: I0308 22:13:54.256360 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc"
Mar 08 22:13:54.256433 master-0 kubenswrapper[29458]: I0308 22:13:54.256400 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e-webhook-cert\") pod \"packageserver-f988cd549-68kmh\" (UID: \"4e2eb05c-eaa5-4d9b-abad-c0ef6835087e\") " pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh"
Mar 08 22:13:54.256526 master-0 kubenswrapper[29458]: I0308 22:13:54.256435 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d063b330-4180-43de-a248-c573183d96f1-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh\" (UID: \"d063b330-4180-43de-a248-c573183d96f1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh"
Mar 08 22:13:54.256526 master-0 kubenswrapper[29458]: I0308 22:13:54.256480 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a7e92d4-b7ed-408b-b7cf-00209a627bea-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-jd2m9\" (UID: \"8a7e92d4-b7ed-408b-b7cf-00209a627bea\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9"
Mar 08 22:13:54.256526 master-0 kubenswrapper[29458]: I0308 22:13:54.256514 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e-apiservice-cert\") pod \"packageserver-f988cd549-68kmh\" (UID: \"4e2eb05c-eaa5-4d9b-abad-c0ef6835087e\") " pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh"
Mar 08 22:13:54.256646 master-0 kubenswrapper[29458]: I0308 22:13:54.256544 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/d063b330-4180-43de-a248-c573183d96f1-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh\" (UID: \"d063b330-4180-43de-a248-c573183d96f1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh"
Mar 08 22:13:54.256646 master-0 kubenswrapper[29458]: I0308 22:13:54.256581 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4b5246dc-b715-4678-a3a9-878df57dd236-certs\") pod \"machine-config-server-svxwz\" (UID: \"4b5246dc-b715-4678-a3a9-878df57dd236\") " pod="openshift-machine-config-operator/machine-config-server-svxwz"
Mar 08 22:13:54.256646 master-0 kubenswrapper[29458]: I0308 22:13:54.256608 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-client-ca-bundle\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x"
Mar 08 22:13:54.256646 master-0 kubenswrapper[29458]: I0308 22:13:54.256634 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc"
Mar 08 22:13:54.256797 master-0 kubenswrapper[29458]: I0308 22:13:54.256659 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-secret-telemeter-client\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg"
Mar 08 22:13:54.256797 master-0 kubenswrapper[29458]: I0308 22:13:54.256741 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-config\") pod \"machine-approver-754bdc9f9d-stxvg\" (UID: \"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg"
Mar 08 22:13:54.256880 master-0 kubenswrapper[29458]: I0308 22:13:54.256810 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-stxvg\" (UID: \"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg"
Mar 08 22:13:54.256880 master-0 kubenswrapper[29458]: I0308 22:13:54.256841 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x"
Mar 08 22:13:54.256880 master-0 kubenswrapper[29458]: I0308 22:13:54.256867 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c377685c-2024-4ef7-932d-5858eeb0d9bd-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-8rbn8\" (UID: \"c377685c-2024-4ef7-932d-5858eeb0d9bd\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8"
Mar 08 22:13:54.257048 master-0 kubenswrapper[29458]: I0308 22:13:54.256902 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-secret-metrics-client-certs\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x"
Mar 08 22:13:54.257048 master-0 kubenswrapper[29458]: I0308 22:13:54.256943 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c3af41e9-c604-48da-bec5-df81c2ef3374-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc"
Mar 08 22:13:54.257048 master-0 kubenswrapper[29458]: I0308 22:13:54.256990 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3e38e989-41b8-4c80-99fb-8d414dda5da1-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-m7phf\" (UID: \"3e38e989-41b8-4c80-99fb-8d414dda5da1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf"
Mar 08 22:13:54.257048 master-0 kubenswrapper[29458]: I0308 22:13:54.257029 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b6bc6f78-2c5c-4add-925f-f6568a49c2cc-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-zn77m\" (UID: \"b6bc6f78-2c5c-4add-925f-f6568a49c2cc\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m"
Mar 08 22:13:54.257241 master-0 kubenswrapper[29458]: I0308 22:13:54.257066 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/00db426a-15d4-4737-a85e-b4cf6362c759-webhook-certs\") pod \"multus-admission-controller-7769569c45-9lhn8\" (UID: \"00db426a-15d4-4737-a85e-b4cf6362c759\") " pod="openshift-multus/multus-admission-controller-7769569c45-9lhn8"
Mar 08 22:13:54.257241 master-0 kubenswrapper[29458]: I0308 22:13:54.257140 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ef14467-bb62-462d-9dec-dee43e4cc9bd-config\") pod \"machine-api-operator-84bf6db4f9-64gfj\" (UID: \"1ef14467-bb62-462d-9dec-dee43e4cc9bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj"
Mar 08 22:13:54.257325 master-0 kubenswrapper[29458]: I0308 22:13:54.257263 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1ef14467-bb62-462d-9dec-dee43e4cc9bd-images\") pod \"machine-api-operator-84bf6db4f9-64gfj\" (UID: \"1ef14467-bb62-462d-9dec-dee43e4cc9bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj"
Mar 08 22:13:54.257375 master-0 kubenswrapper[29458]: I0308 22:13:54.257322 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-stxvg\" (UID: \"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg"
Mar 08 22:13:54.257421 master-0 kubenswrapper[29458]: I0308 22:13:54.257378 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-metrics-client-ca\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg"
Mar 08 22:13:54.260425 master-0 kubenswrapper[29458]: I0308 22:13:54.260390 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Mar 08 22:13:54.262706 master-0 kubenswrapper[29458]: I0308 22:13:54.262644 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9e9c931-9595-42f1-bbc2-c412286f6cd1-config\") pod \"cluster-baremetal-operator-5cdb4c5598-xwmmm\" (UID: \"d9e9c931-9595-42f1-bbc2-c412286f6cd1\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm"
Mar 08 22:13:54.296790 master-0 kubenswrapper[29458]: I0308 22:13:54.296709 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Mar 08 22:13:54.301731 master-0 kubenswrapper[29458]: I0308 22:13:54.301677 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Mar 08 22:13:54.302269 master-0 kubenswrapper[29458]: I0308 22:13:54.302234 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66e50eed-e3ac-431f-931b-7c4c848c491b-serving-cert\") pod \"insights-operator-8f89dfddd-fn4ck\" (UID: \"66e50eed-e3ac-431f-931b-7c4c848c491b\") " pod="openshift-insights/insights-operator-8f89dfddd-fn4ck"
Mar 08 22:13:54.302401 master-0 kubenswrapper[29458]: I0308 22:13:54.302372 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d9e9c931-9595-42f1-bbc2-c412286f6cd1-images\") pod \"cluster-baremetal-operator-5cdb4c5598-xwmmm\" (UID: \"d9e9c931-9595-42f1-bbc2-c412286f6cd1\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm"
Mar 08 22:13:54.319779 master-0 kubenswrapper[29458]: I0308 22:13:54.319736 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Mar 08 22:13:54.340523 master-0 kubenswrapper[29458]: I0308 22:13:54.340471 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-c5hcb"
Mar 08 22:13:54.360234 master-0 kubenswrapper[29458]: I0308 22:13:54.360173 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Mar 08 22:13:54.362617 master-0 kubenswrapper[29458]: I0308 22:13:54.362576 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-p68k6\" (UID: \"c228b17c-fd7b-4273-ac03-eac5d4a3a4ad\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6"
Mar 08 22:13:54.400932 master-0 kubenswrapper[29458]: I0308 22:13:54.400892 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-lmwn6"
Mar 08 22:13:54.419684 master-0 kubenswrapper[29458]: I0308 22:13:54.419641 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-6lw8c"
Mar 08 22:13:54.439907 master-0 kubenswrapper[29458]: I0308 22:13:54.439835 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-mqlfp"
Mar 08 22:13:54.461510 master-0 kubenswrapper[29458]: I0308 22:13:54.461442 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-qdmfw"
Mar 08 22:13:54.480386 master-0 kubenswrapper[29458]: I0308 22:13:54.480324 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 08 22:13:54.489483 master-0 kubenswrapper[29458]: I0308 22:13:54.489429 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e-webhook-cert\") pod \"packageserver-f988cd549-68kmh\" (UID: \"4e2eb05c-eaa5-4d9b-abad-c0ef6835087e\") " pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh"
Mar 08 22:13:54.489748 master-0 kubenswrapper[29458]: I0308 22:13:54.489696 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e-apiservice-cert\") pod \"packageserver-f988cd549-68kmh\" (UID: \"4e2eb05c-eaa5-4d9b-abad-c0ef6835087e\") " pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh"
Mar 08 22:13:54.499807 master-0 kubenswrapper[29458]: I0308 22:13:54.499766 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-fk6p8"
Mar 08 22:13:54.510298 master-0 kubenswrapper[29458]: I0308 22:13:54.510169 29458 generic.go:334] "Generic (PLEG): container finished" podID="345ca27a-f572-4efa-b0ce-dfa8243becd6" containerID="e63666c422a16c752beb8b0b06fe877b0b08af534810c31f0c885141cf9254a6" exitCode=0
Mar 08 22:13:54.510298 master-0 kubenswrapper[29458]: I0308 22:13:54.510278 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Mar 08 22:13:54.520647 master-0 kubenswrapper[29458]: I0308 22:13:54.520600 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 08 22:13:54.525271 master-0 kubenswrapper[29458]: I0308 22:13:54.525213 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1ef14467-bb62-462d-9dec-dee43e4cc9bd-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-64gfj\" (UID: \"1ef14467-bb62-462d-9dec-dee43e4cc9bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj"
Mar 08 22:13:54.538477 master-0 kubenswrapper[29458]: I0308 22:13:54.538415 29458 request.go:700] Waited for 2.000097469s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0
Mar 08 22:13:54.540018 master-0 kubenswrapper[29458]: I0308 22:13:54.539987 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 08 22:13:54.548358 master-0 kubenswrapper[29458]: I0308 22:13:54.548315 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ef14467-bb62-462d-9dec-dee43e4cc9bd-config\") pod \"machine-api-operator-84bf6db4f9-64gfj\" (UID: \"1ef14467-bb62-462d-9dec-dee43e4cc9bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj"
Mar 08 22:13:54.560920 master-0 kubenswrapper[29458]: I0308 22:13:54.560864 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 08 22:13:54.569185 master-0 kubenswrapper[29458]: I0308 22:13:54.569138 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1ef14467-bb62-462d-9dec-dee43e4cc9bd-images\") pod \"machine-api-operator-84bf6db4f9-64gfj\" (UID: \"1ef14467-bb62-462d-9dec-dee43e4cc9bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj"
Mar 08 22:13:54.580445 master-0 kubenswrapper[29458]: I0308 22:13:54.580406 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 08 22:13:54.590759 master-0 kubenswrapper[29458]: I0308 22:13:54.590690 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3e38e989-41b8-4c80-99fb-8d414dda5da1-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-m7phf\" (UID: \"3e38e989-41b8-4c80-99fb-8d414dda5da1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf"
Mar 08 22:13:54.600880 master-0 kubenswrapper[29458]: I0308 22:13:54.600804 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 08 22:13:54.620028 master-0 kubenswrapper[29458]: I0308 22:13:54.619958 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 08 22:13:54.625181 master-0 kubenswrapper[29458]: I0308 22:13:54.625115 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3e38e989-41b8-4c80-99fb-8d414dda5da1-images\") pod \"machine-config-operator-fdb5c78b5-m7phf\" (UID: \"3e38e989-41b8-4c80-99fb-8d414dda5da1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf"
Mar 08 22:13:54.641546 master-0 kubenswrapper[29458]: I0308 22:13:54.641465 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 08 22:13:54.648660 master-0 kubenswrapper[29458]: I0308 22:13:54.645237 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3-mcd-auth-proxy-config\") pod \"machine-config-daemon-q669r\" (UID: \"7868a4fb-af89-4bdc-b41b-31f4ee59b5f3\") " pod="openshift-machine-config-operator/machine-config-daemon-q669r"
Mar 08 22:13:54.648660 master-0 kubenswrapper[29458]: I0308 22:13:54.645832 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b6bc6f78-2c5c-4add-925f-f6568a49c2cc-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-zn77m\" (UID: \"b6bc6f78-2c5c-4add-925f-f6568a49c2cc\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m"
Mar 08 22:13:54.648660 master-0 kubenswrapper[29458]: I0308 22:13:54.646871 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3e38e989-41b8-4c80-99fb-8d414dda5da1-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-m7phf\" (UID: \"3e38e989-41b8-4c80-99fb-8d414dda5da1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf"
Mar 08 22:13:54.661364 master-0 kubenswrapper[29458]: I0308 22:13:54.661195 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 08 22:13:54.682038 master-0 kubenswrapper[29458]: I0308 22:13:54.681955 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-4m8r8"
Mar 08 22:13:54.701164 master-0 kubenswrapper[29458]: I0308 22:13:54.701095 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 08 22:13:54.706895 master-0 kubenswrapper[29458]: I0308 22:13:54.706825 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4b5246dc-b715-4678-a3a9-878df57dd236-node-bootstrap-token\") pod \"machine-config-server-svxwz\" (UID: \"4b5246dc-b715-4678-a3a9-878df57dd236\") " pod="openshift-machine-config-operator/machine-config-server-svxwz"
Mar 08 22:13:54.720548 master-0 kubenswrapper[29458]: I0308 22:13:54.720438 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 08 22:13:54.740157 master-0 kubenswrapper[29458]: I0308 22:13:54.740090 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 08 22:13:54.749733 master-0 kubenswrapper[29458]: I0308 22:13:54.749646 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4b5246dc-b715-4678-a3a9-878df57dd236-certs\") pod \"machine-config-server-svxwz\" (UID: \"4b5246dc-b715-4678-a3a9-878df57dd236\") " pod="openshift-machine-config-operator/machine-config-server-svxwz"
Mar 08 22:13:54.761814 master-0 kubenswrapper[29458]: I0308 22:13:54.761678 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-clq2r"
Mar 08 22:13:54.780692 master-0 kubenswrapper[29458]: I0308 22:13:54.780603 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-cgg74"
Mar 08 22:13:54.800893 master-0 kubenswrapper[29458]: I0308 22:13:54.800817 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 08 22:13:54.809454 master-0 kubenswrapper[29458]: I0308 22:13:54.809387 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-stxvg\" (UID: \"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg"
Mar 08 22:13:54.821353 master-0 kubenswrapper[29458]: I0308 22:13:54.821244 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 08 22:13:54.829324 master-0 kubenswrapper[29458]: I0308 22:13:54.829264 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-stxvg\" (UID: \"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg"
Mar 08 22:13:54.842436 master-0 kubenswrapper[29458]: I0308 22:13:54.842365 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 08 22:13:54.848104 master-0 kubenswrapper[29458]: I0308 22:13:54.847993 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-config\") pod \"machine-approver-754bdc9f9d-stxvg\" (UID: \"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg"
Mar 08 22:13:54.861582 master-0 kubenswrapper[29458]: I0308 22:13:54.861468 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 08 22:13:54.879850 master-0 kubenswrapper[29458]: I0308 22:13:54.879746 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Mar 08 22:13:54.888520 master-0 kubenswrapper[29458]: I0308 22:13:54.888447 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b6bc6f78-2c5c-4add-925f-f6568a49c2cc-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-zn77m\" (UID: \"b6bc6f78-2c5c-4add-925f-f6568a49c2cc\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m"
Mar 08 22:13:54.900417 master-0 kubenswrapper[29458]: I0308 22:13:54.900351 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Mar 08 22:13:54.906427 master-0 kubenswrapper[29458]: I0308 22:13:54.906369 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d063b330-4180-43de-a248-c573183d96f1-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh\" (UID: \"d063b330-4180-43de-a248-c573183d96f1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh"
Mar 08 22:13:54.921052 master-0 kubenswrapper[29458]: I0308 22:13:54.920954 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Mar 08 22:13:54.925847 master-0 kubenswrapper[29458]: I0308 22:13:54.925772 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3-proxy-tls\") pod \"machine-config-daemon-q669r\" (UID: \"7868a4fb-af89-4bdc-b41b-31f4ee59b5f3\") " pod="openshift-machine-config-operator/machine-config-daemon-q669r"
Mar 08 22:13:54.943593 master-0 kubenswrapper[29458]: I0308 22:13:54.943505 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Mar 08 22:13:54.944750 master-0 kubenswrapper[29458]: I0308 22:13:54.944672 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8a7e92d4-b7ed-408b-b7cf-00209a627bea-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-jd2m9\" (UID: \"8a7e92d4-b7ed-408b-b7cf-00209a627bea\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9"
Mar 08 22:13:54.946898 master-0 kubenswrapper[29458]: I0308 22:13:54.946853 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0269ed52-a753-49aa-9c38-c7aee23cebbd-metrics-client-ca\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g"
Mar 08 22:13:54.948340 master-0 kubenswrapper[29458]: I0308 22:13:54.948276 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c377685c-2024-4ef7-932d-5858eeb0d9bd-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-8rbn8\" (UID: \"c377685c-2024-4ef7-932d-5858eeb0d9bd\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8"
Mar 08 22:13:54.948441 master-0 kubenswrapper[29458]: I0308 22:13:54.948348 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-metrics-client-ca\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg"
Mar 08 22:13:54.949357 master-0 kubenswrapper[29458]: I0308 22:13:54.949319 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c3af41e9-c604-48da-bec5-df81c2ef3374-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc"
Mar 08 22:13:54.962683 master-0 kubenswrapper[29458]: I0308 22:13:54.962617 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-jfbvc"
Mar 08 22:13:54.980822 master-0 kubenswrapper[29458]: I0308 22:13:54.980753 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-fnk6l"
Mar 08 22:13:55.002292 master-0 kubenswrapper[29458]: I0308 22:13:55.002231 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Mar 08 22:13:55.004580 master-0 kubenswrapper[29458]: I0308 22:13:55.004523 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c377685c-2024-4ef7-932d-5858eeb0d9bd-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-8rbn8\" (UID: \"c377685c-2024-4ef7-932d-5858eeb0d9bd\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8"
Mar 08 22:13:55.021342 master-0 kubenswrapper[29458]: I0308 22:13:55.021199 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Mar 08 22:13:55.027609 master-0 kubenswrapper[29458]: I0308 22:13:55.027543 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8a7e92d4-b7ed-408b-b7cf-00209a627bea-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-jd2m9\" (UID: \"8a7e92d4-b7ed-408b-b7cf-00209a627bea\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9"
Mar 08 22:13:55.042675 master-0 kubenswrapper[29458]: I0308 22:13:55.042599 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Mar 08 22:13:55.049883 master-0 kubenswrapper[29458]: I0308 22:13:55.049835 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d063b330-4180-43de-a248-c573183d96f1-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh\" (UID: \"d063b330-4180-43de-a248-c573183d96f1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh"
Mar 08 22:13:55.061263 master-0 kubenswrapper[29458]: I0308 22:13:55.061216 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Mar 08 22:13:55.080379 master-0 kubenswrapper[29458]: I0308 22:13:55.080312 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-2sq4s"
Mar 08 22:13:55.100301 master-0 kubenswrapper[29458]: I0308 22:13:55.100204 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Mar 08 22:13:55.108338 master-0 kubenswrapper[29458]: I0308 22:13:55.108279 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a7e92d4-b7ed-408b-b7cf-00209a627bea-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-jd2m9\" (UID: \"8a7e92d4-b7ed-408b-b7cf-00209a627bea\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9"
Mar 08 22:13:55.120461 master-0 kubenswrapper[29458]: I0308 22:13:55.120407 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-wjqj5"
Mar 08 22:13:55.140351 master-0 kubenswrapper[29458]: I0308 22:13:55.140287 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Mar 08 22:13:55.144544 master-0 kubenswrapper[29458]: I0308 22:13:55.144508 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0269ed52-a753-49aa-9c38-c7aee23cebbd-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g"
Mar 08 22:13:55.161133 master-0 kubenswrapper[29458]: I0308 22:13:55.161038 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Mar 08 22:13:55.181092 master-0 kubenswrapper[29458]: I0308 22:13:55.180988 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Mar 08 22:13:55.189605 master-0 kubenswrapper[29458]: I0308 22:13:55.189536 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/d063b330-4180-43de-a248-c573183d96f1-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh\" (UID: \"d063b330-4180-43de-a248-c573183d96f1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh"
Mar 08 22:13:55.199930 master-0 kubenswrapper[29458]: I0308 22:13:55.199882 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-ldwk8"
Mar 08 22:13:55.220717 master-0 kubenswrapper[29458]: I0308 22:13:55.220616 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Mar 08 22:13:55.225733 master-0 kubenswrapper[29458]: I0308 22:13:55.225668 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0269ed52-a753-49aa-9c38-c7aee23cebbd-node-exporter-tls\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g"
Mar 08 22:13:55.241091 master-0 kubenswrapper[29458]: I0308 22:13:55.240990 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-t7cwt"
Mar 08 22:13:55.254516 master-0 kubenswrapper[29458]: E0308 22:13:55.254427 29458 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Mar 08 22:13:55.254773 master-0 kubenswrapper[29458]: E0308 22:13:55.254592 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c377685c-2024-4ef7-932d-5858eeb0d9bd-openshift-state-metrics-tls podName:c377685c-2024-4ef7-932d-5858eeb0d9bd nodeName:}" failed. No retries permitted until 2026-03-08 22:13:56.254557072 +0000 UTC m=+5.542614704 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/c377685c-2024-4ef7-932d-5858eeb0d9bd-openshift-state-metrics-tls") pod "openshift-state-metrics-74cc79fd76-8rbn8" (UID: "c377685c-2024-4ef7-932d-5858eeb0d9bd") : failed to sync secret cache: timed out waiting for the condition
Mar 08 22:13:55.254773 master-0 kubenswrapper[29458]: E0308 22:13:55.254708 29458 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 08 22:13:55.254773 master-0 kubenswrapper[29458]: E0308 22:13:55.254760 29458 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: failed to sync secret cache: timed out waiting for the condition
Mar 08 22:13:55.254967 master-0 kubenswrapper[29458]: E0308 22:13:55.254869 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-serving-certs-ca-bundle podName:ecb3134a-ff4f-4069-8817-010b400296f6 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:56.2548254 +0000 UTC m=+5.542883022 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-serving-certs-ca-bundle") pod "telemeter-client-7d9bcd6578-pxdzg" (UID: "ecb3134a-ff4f-4069-8817-010b400296f6") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 22:13:55.254967 master-0 kubenswrapper[29458]: E0308 22:13:55.254902 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-federate-client-tls podName:ecb3134a-ff4f-4069-8817-010b400296f6 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:56.254887432 +0000 UTC m=+5.542945014 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-federate-client-tls") pod "telemeter-client-7d9bcd6578-pxdzg" (UID: "ecb3134a-ff4f-4069-8817-010b400296f6") : failed to sync secret cache: timed out waiting for the condition
Mar 08 22:13:55.255337 master-0 kubenswrapper[29458]: E0308 22:13:55.255292 29458 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition
Mar 08 22:13:55.255430 master-0 kubenswrapper[29458]: E0308 22:13:55.255402 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-secret-metrics-server-tls podName:d589bfbb-3a7d-4617-9770-5c9ef737cb4a nodeName:}" failed. No retries permitted until 2026-03-08 22:13:56.255375725 +0000 UTC m=+5.543433357 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-secret-metrics-server-tls") pod "metrics-server-f5876b8d7-2222x" (UID: "d589bfbb-3a7d-4617-9770-5c9ef737cb4a") : failed to sync secret cache: timed out waiting for the condition
Mar 08 22:13:55.256637 master-0 kubenswrapper[29458]: E0308 22:13:55.256575 29458 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: failed to sync configmap cache: timed out waiting for the condition
Mar 08 22:13:55.256637 master-0 kubenswrapper[29458]: E0308 22:13:55.256603 29458 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: failed to sync secret cache: timed out waiting for the condition
Mar 08 22:13:55.256815 master-0 kubenswrapper[29458]: E0308 22:13:55.256688 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-telemeter-trusted-ca-bundle podName:ecb3134a-ff4f-4069-8817-010b400296f6 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:56.256666803 +0000 UTC m=+5.544724435 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-telemeter-trusted-ca-bundle") pod "telemeter-client-7d9bcd6578-pxdzg" (UID: "ecb3134a-ff4f-4069-8817-010b400296f6") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 22:13:55.256815 master-0 kubenswrapper[29458]: E0308 22:13:55.256717 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-telemeter-client-tls podName:ecb3134a-ff4f-4069-8817-010b400296f6 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:56.256703654 +0000 UTC m=+5.544761276 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-telemeter-client-tls") pod "telemeter-client-7d9bcd6578-pxdzg" (UID: "ecb3134a-ff4f-4069-8817-010b400296f6") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:55.257890 master-0 kubenswrapper[29458]: E0308 22:13:55.257842 29458 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:55.257999 master-0 kubenswrapper[29458]: E0308 22:13:55.257927 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-metrics-server-audit-profiles podName:d589bfbb-3a7d-4617-9770-5c9ef737cb4a nodeName:}" failed. No retries permitted until 2026-03-08 22:13:56.257906598 +0000 UTC m=+5.545964240 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-metrics-server-audit-profiles") pod "metrics-server-f5876b8d7-2222x" (UID: "d589bfbb-3a7d-4617-9770-5c9ef737cb4a") : failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:55.257999 master-0 kubenswrapper[29458]: E0308 22:13:55.257960 29458 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:55.258159 master-0 kubenswrapper[29458]: E0308 22:13:55.257998 29458 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:55.258159 master-0 kubenswrapper[29458]: E0308 22:13:55.258030 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-secret-metrics-client-certs podName:d589bfbb-3a7d-4617-9770-5c9ef737cb4a nodeName:}" failed. No retries permitted until 2026-03-08 22:13:56.258005391 +0000 UTC m=+5.546063023 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-secret-metrics-client-certs") pod "metrics-server-f5876b8d7-2222x" (UID: "d589bfbb-3a7d-4617-9770-5c9ef737cb4a") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:55.258159 master-0 kubenswrapper[29458]: E0308 22:13:55.258062 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-custom-resource-state-configmap podName:c3af41e9-c604-48da-bec5-df81c2ef3374 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:56.258044812 +0000 UTC m=+5.546102444 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-68b88f8cb5-wznvc" (UID: "c3af41e9-c604-48da-bec5-df81c2ef3374") : failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:55.259202 master-0 kubenswrapper[29458]: E0308 22:13:55.259158 29458 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:55.259202 master-0 kubenswrapper[29458]: E0308 22:13:55.259185 29458 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-dv1om8r64ct8c: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:55.259386 master-0 kubenswrapper[29458]: E0308 22:13:55.259218 29458 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:55.259386 master-0 kubenswrapper[29458]: E0308 22:13:55.259238 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-kube-rbac-proxy-config podName:c3af41e9-c604-48da-bec5-df81c2ef3374 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:56.259217506 +0000 UTC m=+5.547275138 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-68b88f8cb5-wznvc" (UID: "c3af41e9-c604-48da-bec5-df81c2ef3374") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:55.259386 master-0 kubenswrapper[29458]: E0308 22:13:55.259288 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-client-ca-bundle podName:d589bfbb-3a7d-4617-9770-5c9ef737cb4a nodeName:}" failed. No retries permitted until 2026-03-08 22:13:56.259274908 +0000 UTC m=+5.547332540 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-client-ca-bundle") pod "metrics-server-f5876b8d7-2222x" (UID: "d589bfbb-3a7d-4617-9770-5c9ef737cb4a") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:55.259386 master-0 kubenswrapper[29458]: E0308 22:13:55.259277 29458 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:55.259679 master-0 kubenswrapper[29458]: E0308 22:13:55.259373 29458 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:55.259679 master-0 kubenswrapper[29458]: E0308 22:13:55.259297 29458 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:55.259679 master-0 kubenswrapper[29458]: E0308 22:13:55.259313 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-secret-telemeter-client podName:ecb3134a-ff4f-4069-8817-010b400296f6 nodeName:}" failed. 
No retries permitted until 2026-03-08 22:13:56.259300518 +0000 UTC m=+5.547358140 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-secret-telemeter-client") pod "telemeter-client-7d9bcd6578-pxdzg" (UID: "ecb3134a-ff4f-4069-8817-010b400296f6") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:55.259679 master-0 kubenswrapper[29458]: E0308 22:13:55.259646 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00db426a-15d4-4737-a85e-b4cf6362c759-webhook-certs podName:00db426a-15d4-4737-a85e-b4cf6362c759 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:56.259583196 +0000 UTC m=+5.547640978 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/00db426a-15d4-4737-a85e-b4cf6362c759-webhook-certs") pod "multus-admission-controller-7769569c45-9lhn8" (UID: "00db426a-15d4-4737-a85e-b4cf6362c759") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:55.259956 master-0 kubenswrapper[29458]: E0308 22:13:55.259695 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-secret-telemeter-client-kube-rbac-proxy-config podName:ecb3134a-ff4f-4069-8817-010b400296f6 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:56.259670999 +0000 UTC m=+5.547728831 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-7d9bcd6578-pxdzg" (UID: "ecb3134a-ff4f-4069-8817-010b400296f6") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:55.259956 master-0 kubenswrapper[29458]: E0308 22:13:55.259734 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-configmap-kubelet-serving-ca-bundle podName:d589bfbb-3a7d-4617-9770-5c9ef737cb4a nodeName:}" failed. No retries permitted until 2026-03-08 22:13:56.25971693 +0000 UTC m=+5.547774752 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-configmap-kubelet-serving-ca-bundle") pod "metrics-server-f5876b8d7-2222x" (UID: "d589bfbb-3a7d-4617-9770-5c9ef737cb4a") : failed to sync configmap cache: timed out waiting for the condition Mar 08 22:13:55.259956 master-0 kubenswrapper[29458]: E0308 22:13:55.259176 29458 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:55.259956 master-0 kubenswrapper[29458]: E0308 22:13:55.259864 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-tls podName:c3af41e9-c604-48da-bec5-df81c2ef3374 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:56.259835504 +0000 UTC m=+5.547893326 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-tls") pod "kube-state-metrics-68b88f8cb5-wznvc" (UID: "c3af41e9-c604-48da-bec5-df81c2ef3374") : failed to sync secret cache: timed out waiting for the condition Mar 08 22:13:55.261141 master-0 kubenswrapper[29458]: I0308 22:13:55.261010 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 08 22:13:55.281804 master-0 kubenswrapper[29458]: I0308 22:13:55.281661 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-cfkxm" Mar 08 22:13:55.302200 master-0 kubenswrapper[29458]: I0308 22:13:55.302037 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 08 22:13:55.320969 master-0 kubenswrapper[29458]: I0308 22:13:55.320701 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 08 22:13:55.351749 master-0 kubenswrapper[29458]: I0308 22:13:55.351672 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 08 22:13:55.360760 master-0 kubenswrapper[29458]: I0308 22:13:55.360699 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-4jq4h" Mar 08 22:13:55.381063 master-0 kubenswrapper[29458]: I0308 22:13:55.380206 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 08 22:13:55.401615 master-0 kubenswrapper[29458]: I0308 22:13:55.401557 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 08 22:13:55.421044 master-0 kubenswrapper[29458]: I0308 22:13:55.420978 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 08 22:13:55.441387 master-0 kubenswrapper[29458]: I0308 22:13:55.441342 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 08 22:13:55.461732 master-0 kubenswrapper[29458]: I0308 22:13:55.461491 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 08 22:13:55.481220 master-0 kubenswrapper[29458]: I0308 22:13:55.481167 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 08 22:13:55.501045 master-0 kubenswrapper[29458]: I0308 22:13:55.500921 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 08 22:13:55.523050 master-0 kubenswrapper[29458]: I0308 22:13:55.522945 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-xhdwj" Mar 08 22:13:55.540579 master-0 kubenswrapper[29458]: I0308 22:13:55.540425 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dv1om8r64ct8c" Mar 08 22:13:55.558788 master-0 kubenswrapper[29458]: I0308 22:13:55.558723 29458 request.go:700] Waited for 3.010332957s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dkubelet-serving-ca-bundle&limit=500&resourceVersion=0 Mar 08 22:13:55.562730 master-0 kubenswrapper[29458]: I0308 22:13:55.562660 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 08 22:13:55.582486 master-0 kubenswrapper[29458]: I0308 22:13:55.582380 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-xjqqb" Mar 08 22:13:55.601098 master-0 kubenswrapper[29458]: I0308 22:13:55.600729 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 08 22:13:55.620331 master-0 kubenswrapper[29458]: I0308 22:13:55.620243 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 08 22:13:55.642950 master-0 kubenswrapper[29458]: I0308 22:13:55.642885 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 08 22:13:55.660676 master-0 kubenswrapper[29458]: I0308 22:13:55.660607 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-7bxvk" Mar 08 22:13:55.681006 master-0 kubenswrapper[29458]: I0308 22:13:55.680947 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 08 22:13:55.717902 master-0 kubenswrapper[29458]: I0308 22:13:55.717814 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngf2z\" (UniqueName: \"kubernetes.io/projected/d4d01185-e485-4697-92c2-31a044f25d82-kube-api-access-ngf2z\") pod \"openshift-controller-manager-operator-8565d84698-x8jg8\" (UID: \"d4d01185-e485-4697-92c2-31a044f25d82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-x8jg8" Mar 08 22:13:55.732822 master-0 kubenswrapper[29458]: I0308 22:13:55.732736 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l47w\" (UniqueName: \"kubernetes.io/projected/2851c096-f5cb-4a46-a5a0-ac0b1341033b-kube-api-access-2l47w\") pod \"cluster-node-tuning-operator-66c7586884-c4lpf\" (UID: \"2851c096-f5cb-4a46-a5a0-ac0b1341033b\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4lpf" Mar 08 22:13:55.764948 master-0 kubenswrapper[29458]: I0308 22:13:55.764869 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a913c639-ebfc-42a3-85cd-8a460027d3ec-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 22:13:55.787100 master-0 kubenswrapper[29458]: I0308 22:13:55.787019 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl4xt\" (UniqueName: \"kubernetes.io/projected/44e67e41-045e-42ef-8f60-6ef15606d6a2-kube-api-access-zl4xt\") pod \"network-metrics-daemon-lqdbv\" (UID: \"44e67e41-045e-42ef-8f60-6ef15606d6a2\") " pod="openshift-multus/network-metrics-daemon-lqdbv" Mar 08 22:13:55.796287 master-0 kubenswrapper[29458]: I0308 22:13:55.796135 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zj5rx\" (UniqueName: \"kubernetes.io/projected/89619d97-2c16-4e76-ba80-8b519f6a9366-kube-api-access-zj5rx\") pod \"community-operators-47cmq\" (UID: \"89619d97-2c16-4e76-ba80-8b519f6a9366\") " pod="openshift-marketplace/community-operators-47cmq" Mar 08 22:13:55.814388 master-0 kubenswrapper[29458]: I0308 22:13:55.814322 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dr4p\" (UniqueName: \"kubernetes.io/projected/df48e7e0-0659-48e2-9b6a-32c964ff47b2-kube-api-access-4dr4p\") pod \"dns-operator-589895fbb7-wtvp5\" (UID: \"df48e7e0-0659-48e2-9b6a-32c964ff47b2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wtvp5" Mar 08 22:13:55.843738 master-0 kubenswrapper[29458]: I0308 22:13:55.843638 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmfqq\" (UniqueName: \"kubernetes.io/projected/c901b468-b8e9-48f8-8050-0d54e24e2adb-kube-api-access-hmfqq\") pod \"csi-snapshot-controller-7577d6f48-wklhr\" (UID: \"c901b468-b8e9-48f8-8050-0d54e24e2adb\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" Mar 08 22:13:55.870051 master-0 kubenswrapper[29458]: I0308 22:13:55.869988 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjrqj\" (UniqueName: \"kubernetes.io/projected/66e50eed-e3ac-431f-931b-7c4c848c491b-kube-api-access-bjrqj\") pod \"insights-operator-8f89dfddd-fn4ck\" (UID: \"66e50eed-e3ac-431f-931b-7c4c848c491b\") " pod="openshift-insights/insights-operator-8f89dfddd-fn4ck" Mar 08 22:13:55.902954 master-0 kubenswrapper[29458]: I0308 22:13:55.902894 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpxls\" (UniqueName: \"kubernetes.io/projected/081acedd-4c88-461f-80f3-e80fdbadb725-kube-api-access-cpxls\") pod \"ovnkube-control-plane-66b55d57d-ngrjm\" (UID: \"081acedd-4c88-461f-80f3-e80fdbadb725\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-ngrjm" Mar 08 22:13:55.906222 master-0 kubenswrapper[29458]: I0308 22:13:55.906180 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clxsk\" (UniqueName: \"kubernetes.io/projected/da51940a-a38f-4baf-9c14-b2f1f46b5aed-kube-api-access-clxsk\") pod \"route-controller-manager-86888d445f-7f74k\" (UID: \"da51940a-a38f-4baf-9c14-b2f1f46b5aed\") " pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" Mar 08 22:13:55.926432 master-0 kubenswrapper[29458]: I0308 22:13:55.926344 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5t9m\" (UniqueName: \"kubernetes.io/projected/088eecd9-a153-4fe0-af5a-78f5bdc0eb6b-kube-api-access-w5t9m\") pod \"redhat-operators-8w7wm\" (UID: \"088eecd9-a153-4fe0-af5a-78f5bdc0eb6b\") " pod="openshift-marketplace/redhat-operators-8w7wm" Mar 08 22:13:55.944659 master-0 kubenswrapper[29458]: I0308 22:13:55.944606 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kz92\" (UniqueName: \"kubernetes.io/projected/81f5ed55-225c-41e2-bc9d-b41063a604c9-kube-api-access-7kz92\") pod \"router-default-79f8cd6fdd-4fsdl\" (UID: \"81f5ed55-225c-41e2-bc9d-b41063a604c9\") " pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:13:55.956107 master-0 kubenswrapper[29458]: I0308 22:13:55.953480 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwqqw\" (UniqueName: 
\"kubernetes.io/projected/a8e00c74-fb72-4e3d-a22c-c38a4772a813-kube-api-access-gwqqw\") pod \"openshift-apiserver-operator-799b6db4d7-nqz5k\" (UID: \"a8e00c74-fb72-4e3d-a22c-c38a4772a813\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-nqz5k" Mar 08 22:13:55.986268 master-0 kubenswrapper[29458]: I0308 22:13:55.986201 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9xj9\" (UniqueName: \"kubernetes.io/projected/96a67acb-9cc6-4793-b99a-01479b239d76-kube-api-access-d9xj9\") pod \"multus-additional-cni-plugins-74fmb\" (UID: \"96a67acb-9cc6-4793-b99a-01479b239d76\") " pod="openshift-multus/multus-additional-cni-plugins-74fmb" Mar 08 22:13:55.992056 master-0 kubenswrapper[29458]: I0308 22:13:55.992004 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04fb7bdb-fb5a-4187-94a3-67c8f09684ed-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-mww2c\" (UID: \"04fb7bdb-fb5a-4187-94a3-67c8f09684ed\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-mww2c" Mar 08 22:13:56.011624 master-0 kubenswrapper[29458]: I0308 22:13:56.011566 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h4vv\" (UniqueName: \"kubernetes.io/projected/de89c423-0f2a-440f-9fa9-92fefea84b09-kube-api-access-7h4vv\") pod \"cluster-olm-operator-77899cf6d-mnf25\" (UID: \"de89c423-0f2a-440f-9fa9-92fefea84b09\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-mnf25" Mar 08 22:13:56.032530 master-0 kubenswrapper[29458]: I0308 22:13:56.032475 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xcbb\" (UniqueName: \"kubernetes.io/projected/a21e2296-10cb-4c70-ac3e-2173d35faac4-kube-api-access-7xcbb\") pod \"network-operator-7c649bf6d4-znt8q\" (UID: \"a21e2296-10cb-4c70-ac3e-2173d35faac4\") " pod="openshift-network-operator/network-operator-7c649bf6d4-znt8q" Mar 08 22:13:56.052609 master-0 kubenswrapper[29458]: I0308 22:13:56.052487 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqkp4\" (UniqueName: \"kubernetes.io/projected/2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8-kube-api-access-dqkp4\") pod \"cloud-credential-operator-55d85b7b47-mfqlz\" (UID: \"2f9399bc-ac2a-4eb3-b1a0-dd523e5a97c8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-mfqlz" Mar 08 22:13:56.072060 master-0 kubenswrapper[29458]: I0308 22:13:56.072015 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb2lv\" (UniqueName: \"kubernetes.io/projected/669ef8c8-8a32-4ebd-acc4-e8b2b45286a0-kube-api-access-jb2lv\") pod \"node-resolver-qdc2p\" (UID: \"669ef8c8-8a32-4ebd-acc4-e8b2b45286a0\") " pod="openshift-dns/node-resolver-qdc2p" Mar 08 22:13:56.092625 master-0 kubenswrapper[29458]: I0308 22:13:56.092567 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z7fx\" (UniqueName: \"kubernetes.io/projected/971ffa86-4d52-4dc3-ba28-03d116ec3494-kube-api-access-7z7fx\") pod \"kube-storage-version-migrator-operator-7f65c457f5-zk8sw\" (UID: \"971ffa86-4d52-4dc3-ba28-03d116ec3494\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-zk8sw" Mar 08 22:13:56.099367 master-0 kubenswrapper[29458]: I0308 22:13:56.099328 29458 scope.go:117] "RemoveContainer" 
containerID="2d5837857b12c31514737c752f1c881539906b79a846525445cc0f9995a692a4" Mar 08 22:13:56.113421 master-0 kubenswrapper[29458]: I0308 22:13:56.113365 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hstt\" (UniqueName: \"kubernetes.io/projected/4382d186-34e4-40af-9b92-bb17ddcaa23f-kube-api-access-2hstt\") pod \"etcd-operator-5884b9cd56-bh88w\" (UID: \"4382d186-34e4-40af-9b92-bb17ddcaa23f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bh88w" Mar 08 22:13:56.137339 master-0 kubenswrapper[29458]: I0308 22:13:56.137288 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp26r\" (UniqueName: \"kubernetes.io/projected/077643a2-ab2d-4f12-9abf-42a1c56d7aff-kube-api-access-mp26r\") pod \"operator-controller-controller-manager-6598bfb6c4-nk294\" (UID: \"077643a2-ab2d-4f12-9abf-42a1c56d7aff\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 22:13:56.156081 master-0 kubenswrapper[29458]: I0308 22:13:56.156043 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdfls\" (UniqueName: \"kubernetes.io/projected/c228b17c-fd7b-4273-ac03-eac5d4a3a4ad-kube-api-access-sdfls\") pod \"cluster-storage-operator-6fbfc8dc8f-p68k6\" (UID: \"c228b17c-fd7b-4273-ac03-eac5d4a3a4ad\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-p68k6" Mar 08 22:13:56.172775 master-0 kubenswrapper[29458]: I0308 22:13:56.172718 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjt52\" (UniqueName: \"kubernetes.io/projected/7e0267ba-5dd7-4e81-885f-95b27a7b42ea-kube-api-access-jjt52\") pod \"marketplace-operator-64bf9778cb-5ljhh\" (UID: \"7e0267ba-5dd7-4e81-885f-95b27a7b42ea\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 22:13:56.197745 master-0 kubenswrapper[29458]: I0308 22:13:56.197680 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv57k\" (UniqueName: \"kubernetes.io/projected/be431b74-1116-4b0f-8b25-bbb0408411b0-kube-api-access-tv57k\") pod \"package-server-manager-854648ff6d-x5zxr\" (UID: \"be431b74-1116-4b0f-8b25-bbb0408411b0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 22:13:56.224907 master-0 kubenswrapper[29458]: I0308 22:13:56.224830 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvmk7\" (UniqueName: \"kubernetes.io/projected/d9fe466f-5a23-4f69-8a96-44bd5d6194f5-kube-api-access-nvmk7\") pod \"cluster-autoscaler-operator-69576476f7-dvgxg\" (UID: \"d9fe466f-5a23-4f69-8a96-44bd5d6194f5\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-dvgxg" Mar 08 22:13:56.234479 master-0 kubenswrapper[29458]: I0308 22:13:56.234439 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg2dp\" (UniqueName: \"kubernetes.io/projected/0cb21214-292a-48ee-85e2-6b1e62f40cb4-kube-api-access-sg2dp\") pod \"dns-default-65ts8\" (UID: \"0cb21214-292a-48ee-85e2-6b1e62f40cb4\") " pod="openshift-dns/dns-default-65ts8" Mar 08 22:13:56.254408 master-0 kubenswrapper[29458]: I0308 22:13:56.254365 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ht4t\" (UniqueName: \"kubernetes.io/projected/e8ef68b9-6f8d-4697-b269-91ee4e310752-kube-api-access-6ht4t\") pod \"service-ca-84bfdbbb7f-b8zkz\" (UID: \"e8ef68b9-6f8d-4697-b269-91ee4e310752\") " 
pod="openshift-service-ca/service-ca-84bfdbbb7f-b8zkz" Mar 08 22:13:56.294781 master-0 kubenswrapper[29458]: I0308 22:13:56.294711 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwdhp\" (UniqueName: \"kubernetes.io/projected/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-kube-api-access-vwdhp\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 22:13:56.306690 master-0 kubenswrapper[29458]: I0308 22:13:56.306557 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:13:56.306690 master-0 kubenswrapper[29458]: I0308 22:13:56.306632 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:13:56.306690 master-0 kubenswrapper[29458]: I0308 22:13:56.306658 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-client-ca-bundle\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:13:56.306690 master-0 kubenswrapper[29458]: I0308 22:13:56.306679 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-secret-telemeter-client\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:13:56.306948 master-0 kubenswrapper[29458]: I0308 22:13:56.306714 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:13:56.306948 master-0 kubenswrapper[29458]: I0308 22:13:56.306735 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-secret-metrics-client-certs\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:13:56.306948 master-0 kubenswrapper[29458]: I0308 22:13:56.306774 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/00db426a-15d4-4737-a85e-b4cf6362c759-webhook-certs\") pod \"multus-admission-controller-7769569c45-9lhn8\" (UID: \"00db426a-15d4-4737-a85e-b4cf6362c759\") " 
pod="openshift-multus/multus-admission-controller-7769569c45-9lhn8" Mar 08 22:13:56.307092 master-0 kubenswrapper[29458]: I0308 22:13:56.306977 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c377685c-2024-4ef7-932d-5858eeb0d9bd-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-8rbn8\" (UID: \"c377685c-2024-4ef7-932d-5858eeb0d9bd\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8" Mar 08 22:13:56.307092 master-0 kubenswrapper[29458]: I0308 22:13:56.306999 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-federate-client-tls\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:13:56.307092 master-0 kubenswrapper[29458]: I0308 22:13:56.307021 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-serving-certs-ca-bundle\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:13:56.307217 master-0 kubenswrapper[29458]: I0308 22:13:56.307099 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-secret-metrics-server-tls\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:13:56.307217 master-0 kubenswrapper[29458]: I0308 22:13:56.307132 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-telemeter-client-tls\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:13:56.307217 master-0 kubenswrapper[29458]: I0308 22:13:56.307195 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-telemeter-trusted-ca-bundle\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:13:56.307327 master-0 kubenswrapper[29458]: I0308 22:13:56.307236 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:13:56.307327 master-0 kubenswrapper[29458]: I0308 22:13:56.307261 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-metrics-server-audit-profiles\") pod \"metrics-server-f5876b8d7-2222x\" (UID: 
\"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:13:56.307327 master-0 kubenswrapper[29458]: I0308 22:13:56.307284 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:13:56.307622 master-0 kubenswrapper[29458]: I0308 22:13:56.307588 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:13:56.307809 master-0 kubenswrapper[29458]: I0308 22:13:56.307776 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:13:56.307986 master-0 kubenswrapper[29458]: I0308 22:13:56.307955 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:13:56.308225 master-0 kubenswrapper[29458]: I0308 22:13:56.308197 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-client-ca-bundle\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:13:56.308398 master-0 kubenswrapper[29458]: I0308 22:13:56.308369 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-secret-telemeter-client\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:13:56.308523 master-0 kubenswrapper[29458]: I0308 22:13:56.308502 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:13:56.308688 master-0 kubenswrapper[29458]: I0308 22:13:56.308663 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: 
\"kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-secret-metrics-client-certs\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:13:56.308912 master-0 kubenswrapper[29458]: I0308 22:13:56.308832 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/00db426a-15d4-4737-a85e-b4cf6362c759-webhook-certs\") pod \"multus-admission-controller-7769569c45-9lhn8\" (UID: \"00db426a-15d4-4737-a85e-b4cf6362c759\") " pod="openshift-multus/multus-admission-controller-7769569c45-9lhn8" Mar 08 22:13:56.309038 master-0 kubenswrapper[29458]: I0308 22:13:56.309013 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c377685c-2024-4ef7-932d-5858eeb0d9bd-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-8rbn8\" (UID: \"c377685c-2024-4ef7-932d-5858eeb0d9bd\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8" Mar 08 22:13:56.309252 master-0 kubenswrapper[29458]: I0308 22:13:56.309208 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-federate-client-tls\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:13:56.309389 master-0 kubenswrapper[29458]: I0308 22:13:56.309363 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-serving-certs-ca-bundle\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:13:56.309542 master-0 kubenswrapper[29458]: I0308 22:13:56.309520 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-secret-metrics-server-tls\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:13:56.309698 master-0 kubenswrapper[29458]: I0308 22:13:56.309677 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-telemeter-client-tls\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:13:56.309935 master-0 kubenswrapper[29458]: I0308 22:13:56.309899 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecb3134a-ff4f-4069-8817-010b400296f6-telemeter-trusted-ca-bundle\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:13:56.310054 master-0 kubenswrapper[29458]: I0308 22:13:56.310034 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/ecb3134a-ff4f-4069-8817-010b400296f6-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:13:56.310284 master-0 kubenswrapper[29458]: I0308 22:13:56.310261 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-metrics-server-audit-profiles\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:13:56.339741 master-0 kubenswrapper[29458]: I0308 22:13:56.339699 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvp5b\" (UniqueName: \"kubernetes.io/projected/a5afb146-31d7-4da9-8738-b6c15528233a-kube-api-access-mvp5b\") pod \"apiserver-6bf768964c-srxfg\" (UID: \"a5afb146-31d7-4da9-8738-b6c15528233a\") " pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 22:13:56.353034 master-0 kubenswrapper[29458]: I0308 22:13:56.352971 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tlmx\" (UniqueName: \"kubernetes.io/projected/0d851f97-b21e-432e-a4c3-dc0a8ff00e84-kube-api-access-7tlmx\") pod \"service-ca-operator-69b6fc6b88-pkcp5\" (UID: \"0d851f97-b21e-432e-a4c3-dc0a8ff00e84\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-pkcp5" Mar 08 22:13:56.353306 master-0 kubenswrapper[29458]: I0308 22:13:56.353217 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpb8q\" (UniqueName: \"kubernetes.io/projected/ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a-kube-api-access-lpb8q\") pod \"apiserver-6f9445b8fd-w44n6\" (UID: \"ac6c9ea4-84d0-4159-8727-8eff9c7b4a7a\") " pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 22:13:56.354367 master-0 kubenswrapper[29458]: I0308 22:13:56.354329 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pwq4\" (UniqueName: \"kubernetes.io/projected/83b5f0b6-adee-4820-8212-b4d182b178d2-kube-api-access-5pwq4\") pod \"catalog-operator-7d9c49f57b-6q5t2\" (UID: \"83b5f0b6-adee-4820-8212-b4d182b178d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" Mar 08 22:13:56.374998 master-0 kubenswrapper[29458]: I0308 22:13:56.374917 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v6dc\" (UniqueName: \"kubernetes.io/projected/2395900a-ff6b-46ff-92c6-a8a1b5675b67-kube-api-access-7v6dc\") pod \"controller-manager-f7df5f5b-txsrq\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") " pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 22:13:56.394403 master-0 kubenswrapper[29458]: I0308 22:13:56.394338 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcqnj\" (UniqueName: \"kubernetes.io/projected/1232f59f-4e6a-46ef-8bec-1bd4e04956ef-kube-api-access-pcqnj\") pod \"ovnkube-node-g4d2r\" (UID: \"1232f59f-4e6a-46ef-8bec-1bd4e04956ef\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:56.420704 master-0 kubenswrapper[29458]: I0308 22:13:56.420638 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96gl4\" (UniqueName: \"kubernetes.io/projected/3f1a7900-a0b2-47fc-b43c-a0a5dee6b657-kube-api-access-96gl4\") pod 
\"authentication-operator-7c6989d6c4-8h8fx\" (UID: \"3f1a7900-a0b2-47fc-b43c-a0a5dee6b657\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-8h8fx" Mar 08 22:13:56.433058 master-0 kubenswrapper[29458]: I0308 22:13:56.433017 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9c64\" (UniqueName: \"kubernetes.io/projected/dfe625a1-5ba4-491f-9ab3-5d91154961a0-kube-api-access-j9c64\") pod \"network-node-identity-trhtl\" (UID: \"dfe625a1-5ba4-491f-9ab3-5d91154961a0\") " pod="openshift-network-node-identity/network-node-identity-trhtl" Mar 08 22:13:56.453510 master-0 kubenswrapper[29458]: I0308 22:13:56.453442 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfpt7\" (UniqueName: \"kubernetes.io/projected/0d0feb73-2ef6-4083-81ce-82a1394ce9c4-kube-api-access-jfpt7\") pod \"migrator-57ccdf9b5-bf6ws\" (UID: \"0d0feb73-2ef6-4083-81ce-82a1394ce9c4\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-bf6ws" Mar 08 22:13:56.473019 master-0 kubenswrapper[29458]: I0308 22:13:56.472943 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjlqz\" (UniqueName: \"kubernetes.io/projected/6eb502a1-db10-46ba-b698-461919464fb9-kube-api-access-sjlqz\") pod \"control-plane-machine-set-operator-6686554ddc-c246n\" (UID: \"6eb502a1-db10-46ba-b698-461919464fb9\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-c246n" Mar 08 22:13:56.490645 master-0 kubenswrapper[29458]: I0308 22:13:56.490568 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znqrj\" (UniqueName: \"kubernetes.io/projected/d9e9c931-9595-42f1-bbc2-c412286f6cd1-kube-api-access-znqrj\") pod \"cluster-baremetal-operator-5cdb4c5598-xwmmm\" (UID: \"d9e9c931-9595-42f1-bbc2-c412286f6cd1\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-xwmmm" Mar 08 22:13:56.523919 master-0 kubenswrapper[29458]: I0308 22:13:56.523826 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxg7t\" (UniqueName: \"kubernetes.io/projected/385e69e4-d443-44bb-8ee4-578a1c902c62-kube-api-access-vxg7t\") pod \"multus-l8ltx\" (UID: \"385e69e4-d443-44bb-8ee4-578a1c902c62\") " pod="openshift-multus/multus-l8ltx" Mar 08 22:13:56.537191 master-0 kubenswrapper[29458]: I0308 22:13:56.536788 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzlpq\" (UniqueName: \"kubernetes.io/projected/4ef806a4-5486-43a9-8bfa-b1670c888dc1-kube-api-access-qzlpq\") pod \"cluster-monitoring-operator-674cbfbd9d-mt484\" (UID: \"4ef806a4-5486-43a9-8bfa-b1670c888dc1\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-mt484" Mar 08 22:13:56.541183 master-0 kubenswrapper[29458]: I0308 22:13:56.541140 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-wklhr_c901b468-b8e9-48f8-8050-0d54e24e2adb/snapshot-controller/4.log" Mar 08 22:13:56.552766 master-0 kubenswrapper[29458]: I0308 22:13:56.552691 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-784c7\" (UniqueName: \"kubernetes.io/projected/d0641333-feda-44c5-baf5-ceee4ce3fd8f-kube-api-access-784c7\") pod \"openshift-config-operator-64488f9d78-krpfs\" (UID: \"d0641333-feda-44c5-baf5-ceee4ce3fd8f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs" Mar 08 22:13:56.559414 master-0 
kubenswrapper[29458]: I0308 22:13:56.559301 29458 request.go:700] Waited for 3.964111811s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token Mar 08 22:13:56.585354 master-0 kubenswrapper[29458]: I0308 22:13:56.585267 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drcp8\" (UniqueName: \"kubernetes.io/projected/a913c639-ebfc-42a3-85cd-8a460027d3ec-kube-api-access-drcp8\") pod \"cluster-image-registry-operator-86d6d77c7c-g2ddr\" (UID: \"a913c639-ebfc-42a3-85cd-8a460027d3ec\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-g2ddr" Mar 08 22:13:56.594595 master-0 kubenswrapper[29458]: I0308 22:13:56.594539 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jwf9\" (UniqueName: \"kubernetes.io/projected/f3fbcd83-a3e1-4de1-aceb-2692d348e495-kube-api-access-5jwf9\") pod \"tuned-rxbl5\" (UID: \"f3fbcd83-a3e1-4de1-aceb-2692d348e495\") " pod="openshift-cluster-node-tuning-operator/tuned-rxbl5" Mar 08 22:13:56.611964 master-0 kubenswrapper[29458]: I0308 22:13:56.611807 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed-bound-sa-token\") pod \"ingress-operator-677db989d6-cjdgr\" (UID: \"84bb22b5-b954-4fa2-b6c0-2f32a8cd7bed\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-cjdgr" Mar 08 22:13:56.632910 master-0 kubenswrapper[29458]: I0308 22:13:56.632824 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b849f992-1020-4633-98be-75705b962fa9-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-8pqc2\" (UID: \"b849f992-1020-4633-98be-75705b962fa9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-8pqc2" Mar 08 22:13:56.652035 master-0 kubenswrapper[29458]: I0308 22:13:56.651965 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxssr\" (UniqueName: \"kubernetes.io/projected/fd9abe2b-f829-4376-9abe-7da0a08770e7-kube-api-access-vxssr\") pod \"cluster-samples-operator-664cb58b85-mkvtk\" (UID: \"fd9abe2b-f829-4376-9abe-7da0a08770e7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mkvtk" Mar 08 22:13:56.675852 master-0 kubenswrapper[29458]: I0308 22:13:56.675785 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmdmr\" (UniqueName: \"kubernetes.io/projected/b358dcb7-d01f-4206-b636-b55a599a73bd-kube-api-access-bmdmr\") pod \"iptables-alerter-pwn9k\" (UID: \"b358dcb7-d01f-4206-b636-b55a599a73bd\") " pod="openshift-network-operator/iptables-alerter-pwn9k" Mar 08 22:13:56.695605 master-0 kubenswrapper[29458]: I0308 22:13:56.695544 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5xq4\" (UniqueName: \"kubernetes.io/projected/f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e-kube-api-access-l5xq4\") pod \"network-check-target-djlff\" (UID: \"f1b63e59-0f09-4bc2-b1e7-a9a9ba97b53e\") " pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 22:13:56.717703 master-0 kubenswrapper[29458]: I0308 22:13:56.717616 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-ln9l2\" (UID: \"f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-ln9l2" Mar 08 22:13:56.732280 master-0 kubenswrapper[29458]: I0308 22:13:56.732234 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6ht7\" (UniqueName: \"kubernetes.io/projected/37bf82cb-adea-46d3-a899-136eb1d1f292-kube-api-access-v6ht7\") pod \"csi-snapshot-controller-operator-5685fbc7d-nl9qg\" (UID: \"37bf82cb-adea-46d3-a899-136eb1d1f292\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-nl9qg" Mar 08 22:13:56.755522 master-0 kubenswrapper[29458]: I0308 22:13:56.755467 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftn6p\" (UniqueName: \"kubernetes.io/projected/2a91f36f-900e-4b99-9be1-dfc61d8e31d9-kube-api-access-ftn6p\") pod \"catalogd-controller-manager-7f8b8b6f4c-qv4bv\" (UID: \"2a91f36f-900e-4b99-9be1-dfc61d8e31d9\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 22:13:56.800120 master-0 kubenswrapper[29458]: I0308 22:13:56.800039 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjndf\" (UniqueName: \"kubernetes.io/projected/10e2e81b-cd18-4e30-b8ad-4cf105daea4a-kube-api-access-sjndf\") pod \"network-check-source-7c67b67d47-qf2dp\" (UID: \"10e2e81b-cd18-4e30-b8ad-4cf105daea4a\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-qf2dp" Mar 08 22:13:56.802686 master-0 kubenswrapper[29458]: I0308 22:13:56.802422 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff6pm\" (UniqueName: \"kubernetes.io/projected/3c50dd1f-fcbc-412c-a1cc-0738ea4464e0-kube-api-access-ff6pm\") pod \"olm-operator-d64cfc9db-xqh7x\" (UID: \"3c50dd1f-fcbc-412c-a1cc-0738ea4464e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" Mar 08 22:13:56.821396 master-0 kubenswrapper[29458]: I0308 22:13:56.821349 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6fbc12f-3c27-4a7a-933f-43a55c960335-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-2mspg\" (UID: \"f6fbc12f-3c27-4a7a-933f-43a55c960335\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-2mspg" Mar 08 22:13:56.844215 master-0 kubenswrapper[29458]: I0308 22:13:56.844171 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2lsl\" (UniqueName: \"kubernetes.io/projected/b1207b6b-0517-46eb-9953-737f2bf1040d-kube-api-access-d2lsl\") pod \"certified-operators-8ctpt\" (UID: \"b1207b6b-0517-46eb-9953-737f2bf1040d\") " pod="openshift-marketplace/certified-operators-8ctpt" Mar 08 22:13:56.857593 master-0 kubenswrapper[29458]: I0308 22:13:56.857546 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c52wj\" (UniqueName: \"kubernetes.io/projected/b6bc6f78-2c5c-4add-925f-f6568a49c2cc-kube-api-access-c52wj\") pod \"machine-config-controller-ff46b7bdf-zn77m\" (UID: \"b6bc6f78-2c5c-4add-925f-f6568a49c2cc\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-zn77m" Mar 08 22:13:56.874324 master-0 kubenswrapper[29458]: I0308 22:13:56.874240 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6tfdv\" (UniqueName: \"kubernetes.io/projected/1ef14467-bb62-462d-9dec-dee43e4cc9bd-kube-api-access-6tfdv\") pod \"machine-api-operator-84bf6db4f9-64gfj\" (UID: \"1ef14467-bb62-462d-9dec-dee43e4cc9bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-64gfj" Mar 08 22:13:56.892564 master-0 kubenswrapper[29458]: I0308 22:13:56.892505 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fp4g\" (UniqueName: \"kubernetes.io/projected/0269ed52-a753-49aa-9c38-c7aee23cebbd-kube-api-access-8fp4g\") pod \"node-exporter-l8k5g\" (UID: \"0269ed52-a753-49aa-9c38-c7aee23cebbd\") " pod="openshift-monitoring/node-exporter-l8k5g" Mar 08 22:13:56.914032 master-0 kubenswrapper[29458]: I0308 22:13:56.913950 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l82d\" (UniqueName: \"kubernetes.io/projected/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-kube-api-access-9l82d\") pod \"metrics-server-f5876b8d7-2222x\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") " pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:13:56.932260 master-0 kubenswrapper[29458]: I0308 22:13:56.932195 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shdtk\" (UniqueName: \"kubernetes.io/projected/7868a4fb-af89-4bdc-b41b-31f4ee59b5f3-kube-api-access-shdtk\") pod \"machine-config-daemon-q669r\" (UID: \"7868a4fb-af89-4bdc-b41b-31f4ee59b5f3\") " pod="openshift-machine-config-operator/machine-config-daemon-q669r" Mar 08 22:13:56.952394 master-0 kubenswrapper[29458]: I0308 22:13:56.952294 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxxvr\" (UniqueName: \"kubernetes.io/projected/4cbc6c17-7c16-435f-9399-b6f1ddb6d17f-kube-api-access-gxxvr\") pod \"machine-approver-754bdc9f9d-stxvg\" (UID: \"4cbc6c17-7c16-435f-9399-b6f1ddb6d17f\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-stxvg" Mar 08 22:13:56.977396 master-0 kubenswrapper[29458]: I0308 22:13:56.977342 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhp8w\" (UniqueName: \"kubernetes.io/projected/4e2eb05c-eaa5-4d9b-abad-c0ef6835087e-kube-api-access-lhp8w\") pod \"packageserver-f988cd549-68kmh\" (UID: \"4e2eb05c-eaa5-4d9b-abad-c0ef6835087e\") " pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh" Mar 08 22:13:56.997121 master-0 kubenswrapper[29458]: I0308 22:13:56.997027 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z4s4\" (UniqueName: \"kubernetes.io/projected/c377685c-2024-4ef7-932d-5858eeb0d9bd-kube-api-access-4z4s4\") pod \"openshift-state-metrics-74cc79fd76-8rbn8\" (UID: \"c377685c-2024-4ef7-932d-5858eeb0d9bd\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-8rbn8" Mar 08 22:13:57.015302 master-0 kubenswrapper[29458]: I0308 22:13:57.015242 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp86m\" (UniqueName: \"kubernetes.io/projected/3e38e989-41b8-4c80-99fb-8d414dda5da1-kube-api-access-jp86m\") pod \"machine-config-operator-fdb5c78b5-m7phf\" (UID: \"3e38e989-41b8-4c80-99fb-8d414dda5da1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-m7phf" Mar 08 22:13:57.044943 master-0 kubenswrapper[29458]: I0308 22:13:57.044865 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k67bc\" (UniqueName: 
\"kubernetes.io/projected/4eec590b-c536-4b16-a664-81bc3c74eef5-kube-api-access-k67bc\") pod \"redhat-marketplace-mg95b\" (UID: \"4eec590b-c536-4b16-a664-81bc3c74eef5\") " pod="openshift-marketplace/redhat-marketplace-mg95b" Mar 08 22:13:57.053219 master-0 kubenswrapper[29458]: I0308 22:13:57.053156 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v2k8\" (UniqueName: \"kubernetes.io/projected/d063b330-4180-43de-a248-c573183d96f1-kube-api-access-8v2k8\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh\" (UID: \"d063b330-4180-43de-a248-c573183d96f1\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t2sgh" Mar 08 22:13:57.074661 master-0 kubenswrapper[29458]: I0308 22:13:57.074486 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq7xb\" (UniqueName: \"kubernetes.io/projected/4b5246dc-b715-4678-a3a9-878df57dd236-kube-api-access-hq7xb\") pod \"machine-config-server-svxwz\" (UID: \"4b5246dc-b715-4678-a3a9-878df57dd236\") " pod="openshift-machine-config-operator/machine-config-server-svxwz" Mar 08 22:13:57.093276 master-0 kubenswrapper[29458]: I0308 22:13:57.093197 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdz7m\" (UniqueName: \"kubernetes.io/projected/8a7e92d4-b7ed-408b-b7cf-00209a627bea-kube-api-access-qdz7m\") pod \"prometheus-operator-5ff8674d55-jd2m9\" (UID: \"8a7e92d4-b7ed-408b-b7cf-00209a627bea\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-jd2m9" Mar 08 22:13:57.125303 master-0 kubenswrapper[29458]: I0308 22:13:57.125227 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86mrp\" (UniqueName: \"kubernetes.io/projected/00db426a-15d4-4737-a85e-b4cf6362c759-kube-api-access-86mrp\") pod \"multus-admission-controller-7769569c45-9lhn8\" (UID: \"00db426a-15d4-4737-a85e-b4cf6362c759\") " pod="openshift-multus/multus-admission-controller-7769569c45-9lhn8" Mar 08 22:13:57.151574 master-0 kubenswrapper[29458]: I0308 22:13:57.151486 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/345ca27a-f572-4efa-b0ce-dfa8243becd6-kube-api-access\") pod \"installer-4-master-0\" (UID: \"345ca27a-f572-4efa-b0ce-dfa8243becd6\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 08 22:13:57.154379 master-0 kubenswrapper[29458]: I0308 22:13:57.154299 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq2ch\" (UniqueName: \"kubernetes.io/projected/ecb3134a-ff4f-4069-8817-010b400296f6-kube-api-access-pq2ch\") pod \"telemeter-client-7d9bcd6578-pxdzg\" (UID: \"ecb3134a-ff4f-4069-8817-010b400296f6\") " pod="openshift-monitoring/telemeter-client-7d9bcd6578-pxdzg" Mar 08 22:13:57.197156 master-0 kubenswrapper[29458]: I0308 22:13:57.189116 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2nfk\" (UniqueName: \"kubernetes.io/projected/c3af41e9-c604-48da-bec5-df81c2ef3374-kube-api-access-z2nfk\") pod \"kube-state-metrics-68b88f8cb5-wznvc\" (UID: \"c3af41e9-c604-48da-bec5-df81c2ef3374\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-wznvc" Mar 08 22:13:57.199016 master-0 kubenswrapper[29458]: E0308 22:13:57.198937 29458 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 08 
Mar 08 22:13:57.199016 master-0 kubenswrapper[29458]: E0308 22:13:57.198937 29458 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 08 22:13:57.199016 master-0 kubenswrapper[29458]: E0308 22:13:57.199013 29458 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-2-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 08 22:13:57.199265 master-0 kubenswrapper[29458]: E0308 22:13:57.199162 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access podName:1d188983-1f19-4c8e-b604-034bd6308139 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:57.699123013 +0000 UTC m=+6.987180635 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access") pod "installer-2-master-0" (UID: "1d188983-1f19-4c8e-b604-034bd6308139") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 08 22:13:57.218569 master-0 kubenswrapper[29458]: E0308 22:13:57.217963 29458 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.246s"
Mar 08 22:13:57.218569 master-0 kubenswrapper[29458]: I0308 22:13:57.218036 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"345ca27a-f572-4efa-b0ce-dfa8243becd6","Type":"ContainerDied","Data":"e63666c422a16c752beb8b0b06fe877b0b08af534810c31f0c885141cf9254a6"}
Mar 08 22:13:57.231013 master-0 kubenswrapper[29458]: I0308 22:13:57.230926 29458 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Mar 08 22:13:57.264345 master-0 kubenswrapper[29458]: I0308 22:13:57.264251 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 22:13:57.264345 master-0 kubenswrapper[29458]: I0308 22:13:57.264334 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-wklhr" event={"ID":"c901b468-b8e9-48f8-8050-0d54e24e2adb","Type":"ContainerStarted","Data":"d20c467c532d6b4944bb3751246dbd2f5cc56d27a59ada4016759348e9ca76a9"}
Mar 08 22:13:57.264705 master-0 kubenswrapper[29458]: I0308 22:13:57.264378 29458 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Mar 08 22:13:57.264705 master-0 kubenswrapper[29458]: I0308 22:13:57.264451 29458 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 08 22:13:57.264835 master-0 kubenswrapper[29458]: E0308 22:13:57.264807 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a43561f-bdde-456b-b4a4-2055d4fe6880" containerName="assisted-installer-controller"
Mar 08 22:13:57.264887 master-0 kubenswrapper[29458]: I0308 22:13:57.264838 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a43561f-bdde-456b-b4a4-2055d4fe6880" containerName="assisted-installer-controller"
Mar 08 22:13:57.264887 master-0 kubenswrapper[29458]: E0308 22:13:57.264854 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e851e2-74fc-4f4c-b907-3c9158c59cd4" containerName="installer"
Mar 08 22:13:57.264887 master-0 kubenswrapper[29458]: I0308 22:13:57.264868 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e851e2-74fc-4f4c-b907-3c9158c59cd4" containerName="installer"
Mar 08 22:13:57.264992 master-0 kubenswrapper[29458]: E0308 22:13:57.264884 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a90a446-01fc-4032-9d02-d82e25084ea9" containerName="installer"
Mar 08 22:13:57.264992 master-0 kubenswrapper[29458]: I0308 22:13:57.264900 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a90a446-01fc-4032-9d02-d82e25084ea9" containerName="installer"
Mar 08 22:13:57.264992 master-0 kubenswrapper[29458]: E0308 22:13:57.264931 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee0b93ec-6ea0-4704-9449-57781a482ce4" containerName="installer"
Mar 08 22:13:57.264992 master-0 kubenswrapper[29458]: I0308 22:13:57.264944 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee0b93ec-6ea0-4704-9449-57781a482ce4" containerName="installer"
Mar 08 22:13:57.264992 master-0 kubenswrapper[29458]: E0308 22:13:57.264972 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78dc543f-66ed-4098-b5a9-699ec2ccc856" containerName="installer"
Mar 08 22:13:57.264992 master-0 kubenswrapper[29458]: I0308 22:13:57.264985 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="78dc543f-66ed-4098-b5a9-699ec2ccc856" containerName="installer"
Mar 08 22:13:57.265355 master-0 kubenswrapper[29458]: E0308 22:13:57.265001 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler"
Mar 08 22:13:57.265355 master-0 kubenswrapper[29458]: I0308 22:13:57.265015 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler"
Mar 08 22:13:57.265355 master-0 kubenswrapper[29458]: E0308 22:13:57.265040 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler"
Mar 08 22:13:57.265355 master-0 kubenswrapper[29458]: I0308 22:13:57.265055 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler"
Mar 08 22:13:57.265355 master-0 kubenswrapper[29458]: E0308 22:13:57.265102 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57a34dbc-eb6d-44f5-b1aa-4762b69382ed" containerName="installer"
Mar 08 22:13:57.265355 master-0 kubenswrapper[29458]: I0308 22:13:57.265115 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="57a34dbc-eb6d-44f5-b1aa-4762b69382ed" containerName="installer"
Mar 08 22:13:57.265355 master-0 kubenswrapper[29458]: E0308 22:13:57.265137 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c633355a-b323-4458-8ecb-1e490d115f59" containerName="installer"
Mar 08 22:13:57.265355 master-0 kubenswrapper[29458]: I0308 22:13:57.265151 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="c633355a-b323-4458-8ecb-1e490d115f59" containerName="installer"
Mar 08 22:13:57.265355 master-0 kubenswrapper[29458]: E0308 22:13:57.265174 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d188983-1f19-4c8e-b604-034bd6308139" containerName="installer"
Mar 08 22:13:57.265355 master-0 kubenswrapper[29458]: I0308 22:13:57.265188 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d188983-1f19-4c8e-b604-034bd6308139" containerName="installer"
Mar 08 22:13:57.265355 master-0 kubenswrapper[29458]: E0308 22:13:57.265208 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0" containerName="installer"
Mar 08 22:13:57.265355 master-0 kubenswrapper[29458]: I0308 22:13:57.265222 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0" containerName="installer"
Mar 08 22:13:57.265355 master-0 kubenswrapper[29458]: E0308 22:13:57.265238 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f9a1ffa-fdef-4201-81a9-35b944f8c193" containerName="installer"
Mar 08 22:13:57.265355 master-0 kubenswrapper[29458]: I0308 22:13:57.265251 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f9a1ffa-fdef-4201-81a9-35b944f8c193" containerName="installer"
Mar 08 22:13:57.265851 master-0 kubenswrapper[29458]: I0308 22:13:57.265354 29458 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" containerID="cri-o://e3a61e0f18998d1659f1848d9ff8c4de1817df1723214bfa069260c375e7739f" gracePeriod=30
Mar 08 22:13:57.265851 master-0 kubenswrapper[29458]: I0308 22:13:57.265490 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a43561f-bdde-456b-b4a4-2055d4fe6880" containerName="assisted-installer-controller"
Mar 08 22:13:57.265851 master-0 kubenswrapper[29458]: I0308 22:13:57.265525 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="147f99a4-5e8d-4d76-9c13-3ec3ef6d04a0" containerName="installer"
Mar 08 22:13:57.265851 master-0 kubenswrapper[29458]: I0308 22:13:57.265558 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d188983-1f19-4c8e-b604-034bd6308139" containerName="installer"
Mar 08 22:13:57.265851 master-0 kubenswrapper[29458]: I0308 22:13:57.265591 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a90a446-01fc-4032-9d02-d82e25084ea9" containerName="installer"
Mar 08 22:13:57.265851 master-0 kubenswrapper[29458]: I0308 22:13:57.265612 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0e851e2-74fc-4f4c-b907-3c9158c59cd4" containerName="installer"
Mar 08 22:13:57.265851 master-0 kubenswrapper[29458]: I0308 22:13:57.265644 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f9a1ffa-fdef-4201-81a9-35b944f8c193" containerName="installer"
Mar 08 22:13:57.265851 master-0 kubenswrapper[29458]: I0308 22:13:57.265671 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="c633355a-b323-4458-8ecb-1e490d115f59" containerName="installer"
Mar 08 22:13:57.265851 master-0 kubenswrapper[29458]: I0308 22:13:57.265698 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="78dc543f-66ed-4098-b5a9-699ec2ccc856" containerName="installer"
Mar 08 22:13:57.265851 master-0 kubenswrapper[29458]: I0308 22:13:57.265720 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler"
Mar 08 22:13:57.265851 master-0 kubenswrapper[29458]: I0308 22:13:57.265738 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler"
Mar 08 22:13:57.265851 master-0 kubenswrapper[29458]: I0308 22:13:57.265753 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee0b93ec-6ea0-4704-9449-57781a482ce4" containerName="installer"
Mar 08 22:13:57.265851 master-0 kubenswrapper[29458]: I0308 22:13:57.265774 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="57a34dbc-eb6d-44f5-b1aa-4762b69382ed" containerName="installer"
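The cpu_manager and memory_manager bursts above are restart bookkeeping: both managers keep checkpointed per-container resource assignments, and when the kubelet comes back up they drop entries whose pods are no longer active, here a batch of completed installer pods and the bootstrap kube-scheduler. A minimal illustrative sketch of that reconciliation pattern (the types and names are invented for the example; this is not kubelet code):

    package main

    import "fmt"

    type key struct{ podUID, container string }

    // removeStaleState drops checkpointed assignments whose pod is no
    // longer in the set of active pods, mirroring the log lines above.
    func removeStaleState(assignments map[key]string, activePods map[string]bool) {
        for k := range assignments {
            if !activePods[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
                    k.podUID, k.container)
                delete(assignments, k)
            }
        }
    }

    func main() {
        assignments := map[key]string{
            {"a1a56802af72ce1aac6b5077f1695ac0", "kube-scheduler"}: "cpus 0-1",
            {"f0e851e2-74fc-4f4c-b907-3c9158c59cd4", "installer"}:  "cpus 2",
        }
        // Neither pod is active any more, so both entries are dropped.
        removeStaleState(assignments, map[string]bool{})
        fmt.Println("remaining assignments:", len(assignments))
    }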
Mar 08 22:13:57.267813 master-0 kubenswrapper[29458]: I0308 22:13:57.267754 29458 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 08 22:13:57.267813 master-0 kubenswrapper[29458]: I0308 22:13:57.267795 29458 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="a03c6eb7-9dc9-47bf-aa52-db1596d56137"
Mar 08 22:13:57.267942 master-0 kubenswrapper[29458]: I0308 22:13:57.267830 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Mar 08 22:13:57.267942 master-0 kubenswrapper[29458]: I0308 22:13:57.267856 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 22:13:57.268017 master-0 kubenswrapper[29458]: I0308 22:13:57.267990 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 22:13:57.268170 master-0 kubenswrapper[29458]: I0308 22:13:57.268144 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Mar 08 22:13:57.268241 master-0 kubenswrapper[29458]: I0308 22:13:57.268220 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 08 22:13:57.268375 master-0 kubenswrapper[29458]: I0308 22:13:57.268352 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Mar 08 22:13:57.269534 master-0 kubenswrapper[29458]: I0308 22:13:57.269508 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 22:13:57.269635 master-0 kubenswrapper[29458]: I0308 22:13:57.269619 29458 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 08 22:13:57.269732 master-0 kubenswrapper[29458]: I0308 22:13:57.269714 29458 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="a03c6eb7-9dc9-47bf-aa52-db1596d56137"
Mar 08 22:13:57.269814 master-0 kubenswrapper[29458]: I0308 22:13:57.269800 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kx9pl"
Mar 08 22:13:57.269944 master-0 kubenswrapper[29458]: I0308 22:13:57.269928 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 22:13:57.270147 master-0 kubenswrapper[29458]: I0308 22:13:57.270129 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 22:13:57.270261 master-0 kubenswrapper[29458]: I0308 22:13:57.270246 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl"
Mar 08 22:13:57.270342 master-0 kubenswrapper[29458]: I0308 22:13:57.270329 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kx9pl"
Mar 08 22:13:57.270421 master-0 kubenswrapper[29458]: I0308 22:13:57.270408 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6"
Mar 08 22:13:57.270514 master-0 kubenswrapper[29458]: I0308 22:13:57.270501 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg"
Mar 08 22:13:57.270614 master-0 kubenswrapper[29458]: I0308 22:13:57.270600 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 22:13:57.270702 master-0 kubenswrapper[29458]: I0308 22:13:57.270689 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6"
Mar 08 22:13:57.270827 master-0 kubenswrapper[29458]: I0308 22:13:57.270812 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl"
Mar 08 22:13:57.270936 master-0 kubenswrapper[29458]: I0308 22:13:57.270921 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6"
Mar 08 22:13:57.271032 master-0 kubenswrapper[29458]: I0308 22:13:57.271018 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-65ts8"
Mar 08 22:13:57.271164 master-0 kubenswrapper[29458]: I0308 22:13:57.271148 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-65ts8"
Mar 08 22:13:57.271248 master-0 kubenswrapper[29458]: I0308 22:13:57.271234 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg"
Mar 08 22:13:57.271353 master-0 kubenswrapper[29458]: I0308 22:13:57.271339 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg"
Mar 08 22:13:57.271479 master-0 kubenswrapper[29458]: I0308 22:13:57.271463 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs"
Mar 08 22:13:57.271590 master-0 kubenswrapper[29458]: I0308 22:13:57.271566 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-krpfs"
Mar 08 22:13:57.271675 master-0 kubenswrapper[29458]: I0308 22:13:57.271657 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8w7wm"
Mar 08 22:13:57.271752 master-0 kubenswrapper[29458]: I0308 22:13:57.271739 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 22:13:57.273151 master-0 kubenswrapper[29458]: I0308 22:13:57.273061 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 22:13:57.273928 master-0 kubenswrapper[29458]: I0308 22:13:57.273909 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 22:13:57.278140 master-0 kubenswrapper[29458]: I0308 22:13:57.278122 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 22:13:57.283210 master-0 kubenswrapper[29458]: I0308 22:13:57.283157 29458 kubelet_node_status.go:115] "Node was previously registered" node="master-0"
Mar 08 22:13:57.283283 master-0 kubenswrapper[29458]: I0308 22:13:57.283262 29458 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Mar 08 22:13:57.331233 master-0 kubenswrapper[29458]: I0308 22:13:57.331108 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 08 22:13:57.331363 master-0 kubenswrapper[29458]: I0308 22:13:57.331312 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 08 22:13:57.334620 master-0 kubenswrapper[29458]: I0308 22:13:57.334570 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 08 22:13:57.433198 master-0 kubenswrapper[29458]: I0308 22:13:57.432409 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 08 22:13:57.433198 master-0 kubenswrapper[29458]: I0308 22:13:57.432493 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 08 22:13:57.433198 master-0 kubenswrapper[29458]: I0308 22:13:57.432496 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 08 22:13:57.433198 master-0 kubenswrapper[29458]: I0308 22:13:57.432575 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 08 22:13:57.434588 master-0 kubenswrapper[29458]: I0308 22:13:57.434216 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 08 22:13:57.533588 master-0 kubenswrapper[29458]: I0308 22:13:57.533526 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"a1a56802af72ce1aac6b5077f1695ac0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") "
Mar 08 22:13:57.533926 master-0 kubenswrapper[29458]: I0308 22:13:57.533657 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs" (OuterVolumeSpecName: "logs") pod "a1a56802af72ce1aac6b5077f1695ac0" (UID: "a1a56802af72ce1aac6b5077f1695ac0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 22:13:57.533926 master-0 kubenswrapper[29458]: I0308 22:13:57.533692 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"a1a56802af72ce1aac6b5077f1695ac0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") "
Mar 08 22:13:57.533926 master-0 kubenswrapper[29458]: I0308 22:13:57.533788 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets" (OuterVolumeSpecName: "secrets") pod "a1a56802af72ce1aac6b5077f1695ac0" (UID: "a1a56802af72ce1aac6b5077f1695ac0"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 22:13:57.534152 master-0 kubenswrapper[29458]: I0308 22:13:57.534116 29458 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") on node \"master-0\" DevicePath \"\""
Mar 08 22:13:57.534152 master-0 kubenswrapper[29458]: I0308 22:13:57.534138 29458 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") on node \"master-0\" DevicePath \"\""
Mar 08 22:13:57.550833 master-0 kubenswrapper[29458]: I0308 22:13:57.550787 29458 generic.go:334] "Generic (PLEG): container finished" podID="a1a56802af72ce1aac6b5077f1695ac0" containerID="e3a61e0f18998d1659f1848d9ff8c4de1817df1723214bfa069260c375e7739f" exitCode=0
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 08 22:13:57.550995 master-0 kubenswrapper[29458]: I0308 22:13:57.550913 29458 scope.go:117] "RemoveContainer" containerID="b6b246bb81907eac732c126403c542413078697b3a057b896aee540f8c7e39d9" Mar 08 22:13:57.550995 master-0 kubenswrapper[29458]: I0308 22:13:57.550901 29458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cccef451dfdb5aa39f76464f49bfb358a48c497dc23473415516a86865fc62f" Mar 08 22:13:57.590941 master-0 kubenswrapper[29458]: I0308 22:13:57.590878 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:57.591238 master-0 kubenswrapper[29458]: I0308 22:13:57.591208 29458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 22:13:57.591310 master-0 kubenswrapper[29458]: I0308 22:13:57.591240 29458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 22:13:57.591310 master-0 kubenswrapper[29458]: I0308 22:13:57.591255 29458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 22:13:57.634156 master-0 kubenswrapper[29458]: I0308 22:13:57.634066 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 08 22:13:57.650984 master-0 kubenswrapper[29458]: I0308 22:13:57.650470 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:13:57.743331 master-0 kubenswrapper[29458]: I0308 22:13:57.743176 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access\") pod \"installer-2-master-0\" (UID: \"1d188983-1f19-4c8e-b604-034bd6308139\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 22:13:57.743690 master-0 kubenswrapper[29458]: E0308 22:13:57.743473 29458 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 08 22:13:57.743690 master-0 kubenswrapper[29458]: E0308 22:13:57.743527 29458 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-2-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 08 22:13:57.743690 master-0 kubenswrapper[29458]: E0308 22:13:57.743601 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access podName:1d188983-1f19-4c8e-b604-034bd6308139 nodeName:}" failed. No retries permitted until 2026-03-08 22:13:58.743581784 +0000 UTC m=+8.031639386 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access") pod "installer-2-master-0" (UID: "1d188983-1f19-4c8e-b604-034bd6308139") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 08 22:13:57.798174 master-0 kubenswrapper[29458]: I0308 22:13:57.793158 29458 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="1afc5fac-19ba-494c-9b2b-7ababa7d5e73" Mar 08 22:13:58.563377 master-0 kubenswrapper[29458]: I0308 22:13:58.563297 29458 generic.go:334] "Generic (PLEG): container finished" podID="1d3d45b6ce1b3764f9927e623a71adf8" containerID="12c6674aa42bd8b0eb26aa3884a380f89f6c542c1fd2b1107da5615794a626eb" exitCode=0 Mar 08 22:13:58.564343 master-0 kubenswrapper[29458]: I0308 22:13:58.563405 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerDied","Data":"12c6674aa42bd8b0eb26aa3884a380f89f6c542c1fd2b1107da5615794a626eb"} Mar 08 22:13:58.564343 master-0 kubenswrapper[29458]: I0308 22:13:58.563449 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"fa8b0e2a1f33594e86d91161fadbdf5445d6131e7074aa33de79e46a39e1e51d"} Mar 08 22:13:58.567553 master-0 kubenswrapper[29458]: I0308 22:13:58.567485 29458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 22:13:58.567553 master-0 kubenswrapper[29458]: I0308 22:13:58.567529 29458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 22:13:58.569364 master-0 kubenswrapper[29458]: I0308 22:13:58.569298 29458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 22:13:58.766751 master-0 kubenswrapper[29458]: I0308 22:13:58.766625 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access\") pod \"installer-2-master-0\" (UID: \"1d188983-1f19-4c8e-b604-034bd6308139\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 22:13:58.767129 master-0 kubenswrapper[29458]: E0308 22:13:58.767004 29458 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 08 22:13:58.767129 master-0 kubenswrapper[29458]: E0308 22:13:58.767101 29458 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-2-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 08 22:13:58.767308 master-0 kubenswrapper[29458]: E0308 22:13:58.767198 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access podName:1d188983-1f19-4c8e-b604-034bd6308139 nodeName:}" failed. No retries permitted until 2026-03-08 22:14:00.767168656 +0000 UTC m=+10.055226268 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access") pod "installer-2-master-0" (UID: "1d188983-1f19-4c8e-b604-034bd6308139") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 08 22:13:58.985312 master-0 kubenswrapper[29458]: I0308 22:13:58.984846 29458 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1a56802af72ce1aac6b5077f1695ac0" path="/var/lib/kubelet/pods/a1a56802af72ce1aac6b5077f1695ac0/volumes" Mar 08 22:13:58.985312 master-0 kubenswrapper[29458]: I0308 22:13:58.985210 29458 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Mar 08 22:13:59.009353 master-0 kubenswrapper[29458]: I0308 22:13:59.009262 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 08 22:13:59.029346 master-0 kubenswrapper[29458]: I0308 22:13:59.029267 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8ctpt" Mar 08 22:13:59.029562 master-0 kubenswrapper[29458]: I0308 22:13:59.029360 29458 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 08 22:13:59.029562 master-0 kubenswrapper[29458]: I0308 22:13:59.029378 29458 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="1afc5fac-19ba-494c-9b2b-7ababa7d5e73" Mar 08 22:13:59.033247 master-0 kubenswrapper[29458]: I0308 22:13:59.033204 29458 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 08 22:13:59.033304 master-0 kubenswrapper[29458]: I0308 22:13:59.033253 29458 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="1afc5fac-19ba-494c-9b2b-7ababa7d5e73" Mar 08 22:13:59.073911 master-0 kubenswrapper[29458]: I0308 22:13:59.070385 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/345ca27a-f572-4efa-b0ce-dfa8243becd6-kubelet-dir\") pod \"345ca27a-f572-4efa-b0ce-dfa8243becd6\" (UID: \"345ca27a-f572-4efa-b0ce-dfa8243becd6\") " Mar 08 22:13:59.073911 master-0 kubenswrapper[29458]: I0308 22:13:59.070481 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/345ca27a-f572-4efa-b0ce-dfa8243becd6-kube-api-access\") pod \"345ca27a-f572-4efa-b0ce-dfa8243becd6\" (UID: \"345ca27a-f572-4efa-b0ce-dfa8243becd6\") " Mar 08 22:13:59.073911 master-0 kubenswrapper[29458]: I0308 22:13:59.070550 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/345ca27a-f572-4efa-b0ce-dfa8243becd6-var-lock\") pod \"345ca27a-f572-4efa-b0ce-dfa8243becd6\" (UID: \"345ca27a-f572-4efa-b0ce-dfa8243becd6\") " Mar 08 22:13:59.073911 master-0 kubenswrapper[29458]: I0308 22:13:59.070551 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/345ca27a-f572-4efa-b0ce-dfa8243becd6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "345ca27a-f572-4efa-b0ce-dfa8243becd6" (UID: "345ca27a-f572-4efa-b0ce-dfa8243becd6"). InnerVolumeSpecName "kubelet-dir". 
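Reading the three failed SetUp attempts for installer-2-master-0 together shows the volume manager's per-operation exponential backoff: nestedpendingoperations gates each retry at 500ms, then 1s, then 2s after successive failures of the same operation. A minimal sketch of that doubling schedule (the starting delay matches the log; the cap shown is illustrative):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 500 * time.Millisecond
        const maxDelay = 2 * time.Minute // illustrative cap
        for attempt := 1; attempt <= 4; attempt++ {
            // Matches the log's "No retries permitted until ...
            // (durationBeforeRetry ...)" progression.
            fmt.Printf("attempt %d failed; no retries permitted for %s\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }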
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:13:59.073911 master-0 kubenswrapper[29458]: I0308 22:13:59.071117 29458 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/345ca27a-f572-4efa-b0ce-dfa8243becd6-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 22:13:59.073911 master-0 kubenswrapper[29458]: I0308 22:13:59.071996 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/345ca27a-f572-4efa-b0ce-dfa8243becd6-var-lock" (OuterVolumeSpecName: "var-lock") pod "345ca27a-f572-4efa-b0ce-dfa8243becd6" (UID: "345ca27a-f572-4efa-b0ce-dfa8243becd6"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:13:59.080497 master-0 kubenswrapper[29458]: I0308 22:13:59.079645 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/345ca27a-f572-4efa-b0ce-dfa8243becd6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "345ca27a-f572-4efa-b0ce-dfa8243becd6" (UID: "345ca27a-f572-4efa-b0ce-dfa8243becd6"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 22:13:59.172241 master-0 kubenswrapper[29458]: I0308 22:13:59.172160 29458 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/345ca27a-f572-4efa-b0ce-dfa8243becd6-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 08 22:13:59.172241 master-0 kubenswrapper[29458]: I0308 22:13:59.172211 29458 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/345ca27a-f572-4efa-b0ce-dfa8243becd6-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 08 22:13:59.498785 master-0 kubenswrapper[29458]: I0308 22:13:59.498733 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-75c47c75-b4x7d"] Mar 08 22:13:59.499238 master-0 kubenswrapper[29458]: E0308 22:13:59.499217 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="345ca27a-f572-4efa-b0ce-dfa8243becd6" containerName="installer" Mar 08 22:13:59.499283 master-0 kubenswrapper[29458]: I0308 22:13:59.499239 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="345ca27a-f572-4efa-b0ce-dfa8243becd6" containerName="installer" Mar 08 22:13:59.500138 master-0 kubenswrapper[29458]: I0308 22:13:59.499412 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="345ca27a-f572-4efa-b0ce-dfa8243becd6" containerName="installer" Mar 08 22:13:59.502751 master-0 kubenswrapper[29458]: I0308 22:13:59.500576 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.523833 master-0 kubenswrapper[29458]: I0308 22:13:59.521749 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 08 22:13:59.529884 master-0 kubenswrapper[29458]: I0308 22:13:59.529821 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-75c47c75-b4x7d"] Mar 08 22:13:59.543907 master-0 kubenswrapper[29458]: I0308 22:13:59.543861 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 08 22:13:59.556750 master-0 kubenswrapper[29458]: I0308 22:13:59.555858 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:13:59.561588 master-0 kubenswrapper[29458]: I0308 22:13:59.561522 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:13:59.579013 master-0 kubenswrapper[29458]: I0308 22:13:59.578937 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 08 22:13:59.581177 master-0 kubenswrapper[29458]: I0308 22:13:59.581121 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"345ca27a-f572-4efa-b0ce-dfa8243becd6","Type":"ContainerDied","Data":"5e269b66a082f29b4e767340ca080090685cc35deec8a2ff5b8dffcb5ef07002"} Mar 08 22:13:59.581262 master-0 kubenswrapper[29458]: I0308 22:13:59.581183 29458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e269b66a082f29b4e767340ca080090685cc35deec8a2ff5b8dffcb5ef07002" Mar 08 22:13:59.581349 master-0 kubenswrapper[29458]: I0308 22:13:59.581235 29458 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 08 22:13:59.582343 master-0 kubenswrapper[29458]: I0308 22:13:59.582317 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 08 22:13:59.583148 master-0 kubenswrapper[29458]: I0308 22:13:59.582690 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-cliconfig\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.583299 master-0 kubenswrapper[29458]: I0308 22:13:59.583179 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgc2l\" (UniqueName: \"kubernetes.io/projected/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-kube-api-access-vgc2l\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.583299 master-0 kubenswrapper[29458]: I0308 22:13:59.583210 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-user-template-error\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.583299 master-0 kubenswrapper[29458]: I0308 22:13:59.583241 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.583299 master-0 kubenswrapper[29458]: I0308 22:13:59.583271 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-session\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.583484 master-0 kubenswrapper[29458]: I0308 22:13:59.583359 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-serving-cert\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.583484 master-0 kubenswrapper[29458]: I0308 22:13:59.583383 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " 
pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.583484 master-0 kubenswrapper[29458]: I0308 22:13:59.583433 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-user-template-login\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.583605 master-0 kubenswrapper[29458]: I0308 22:13:59.583543 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-audit-dir\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.583655 master-0 kubenswrapper[29458]: I0308 22:13:59.583634 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-router-certs\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.583709 master-0 kubenswrapper[29458]: I0308 22:13:59.583689 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-service-ca\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.583753 master-0 kubenswrapper[29458]: I0308 22:13:59.583736 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.583815 master-0 kubenswrapper[29458]: I0308 22:13:59.583791 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-audit-policies\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.588107 master-0 kubenswrapper[29458]: I0308 22:13:59.588013 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"c7ccc6bae6a76ea5d8eb41fb99713b49f4f8866b716eafa95428db0256e3f1fb"} Mar 08 22:13:59.588176 master-0 kubenswrapper[29458]: I0308 22:13:59.588127 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"1a49c368b724b16fe3ffa9efa5e0386be47fdd7d0fd666e343fb3c87ae9e1850"} Mar 08 22:13:59.588176 
master-0 kubenswrapper[29458]: I0308 22:13:59.588147 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"c556c17320b5d54b9a7b0cee7eb817ed2e58995cfaea6a63d93fe0dc553fc48a"} Mar 08 22:13:59.600205 master-0 kubenswrapper[29458]: I0308 22:13:59.600150 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 08 22:13:59.620889 master-0 kubenswrapper[29458]: I0308 22:13:59.620820 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 08 22:13:59.641227 master-0 kubenswrapper[29458]: I0308 22:13:59.641119 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-bk27b" Mar 08 22:13:59.660457 master-0 kubenswrapper[29458]: I0308 22:13:59.660371 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 08 22:13:59.680421 master-0 kubenswrapper[29458]: I0308 22:13:59.680359 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 08 22:13:59.685154 master-0 kubenswrapper[29458]: I0308 22:13:59.685047 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-user-template-login\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.685154 master-0 kubenswrapper[29458]: I0308 22:13:59.685143 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-audit-dir\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.685281 master-0 kubenswrapper[29458]: I0308 22:13:59.685189 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-router-certs\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.685281 master-0 kubenswrapper[29458]: I0308 22:13:59.685223 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-service-ca\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.685281 master-0 kubenswrapper[29458]: I0308 22:13:59.685245 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " 
pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.685281 master-0 kubenswrapper[29458]: I0308 22:13:59.685281 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-audit-policies\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.685475 master-0 kubenswrapper[29458]: I0308 22:13:59.685346 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-cliconfig\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.685475 master-0 kubenswrapper[29458]: I0308 22:13:59.685376 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgc2l\" (UniqueName: \"kubernetes.io/projected/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-kube-api-access-vgc2l\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.685475 master-0 kubenswrapper[29458]: I0308 22:13:59.685403 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-user-template-error\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.685475 master-0 kubenswrapper[29458]: I0308 22:13:59.685431 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.685475 master-0 kubenswrapper[29458]: I0308 22:13:59.685459 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-session\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.685660 master-0 kubenswrapper[29458]: I0308 22:13:59.685521 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-serving-cert\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.685660 master-0 kubenswrapper[29458]: I0308 22:13:59.685549 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-75c47c75-b4x7d\" 
(UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.686246 master-0 kubenswrapper[29458]: I0308 22:13:59.686199 29458 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 08 22:13:59.687710 master-0 kubenswrapper[29458]: I0308 22:13:59.687664 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-audit-dir\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.688019 master-0 kubenswrapper[29458]: I0308 22:13:59.687972 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-service-ca\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.691026 master-0 kubenswrapper[29458]: I0308 22:13:59.690992 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-router-certs\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.691335 master-0 kubenswrapper[29458]: I0308 22:13:59.691301 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-user-template-login\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.692199 master-0 kubenswrapper[29458]: I0308 22:13:59.692164 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-serving-cert\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.693029 master-0 kubenswrapper[29458]: I0308 22:13:59.692471 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-session\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.693029 master-0 kubenswrapper[29458]: I0308 22:13:59.692549 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.694811 master-0 kubenswrapper[29458]: I0308 22:13:59.694089 
29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-user-template-error\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.698497 master-0 kubenswrapper[29458]: I0308 22:13:59.697920 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.711368 master-0 kubenswrapper[29458]: I0308 22:13:59.711302 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 08 22:13:59.719827 master-0 kubenswrapper[29458]: I0308 22:13:59.719744 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.722162 master-0 kubenswrapper[29458]: I0308 22:13:59.721920 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 08 22:13:59.740629 master-0 kubenswrapper[29458]: I0308 22:13:59.740558 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 08 22:13:59.761148 master-0 kubenswrapper[29458]: I0308 22:13:59.761095 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 08 22:13:59.768251 master-0 kubenswrapper[29458]: I0308 22:13:59.768211 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-audit-policies\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.781252 master-0 kubenswrapper[29458]: I0308 22:13:59.781195 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 08 22:13:59.789361 master-0 kubenswrapper[29458]: I0308 22:13:59.789324 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-cliconfig\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.843618 master-0 kubenswrapper[29458]: I0308 22:13:59.843545 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgc2l\" (UniqueName: \"kubernetes.io/projected/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-kube-api-access-vgc2l\") pod \"oauth-openshift-75c47c75-b4x7d\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " 
pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.849460 master-0 kubenswrapper[29458]: I0308 22:13:59.849423 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:13:59.936456 master-0 kubenswrapper[29458]: I0308 22:13:59.936407 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh" Mar 08 22:13:59.936673 master-0 kubenswrapper[29458]: I0308 22:13:59.936527 29458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 22:13:59.956293 master-0 kubenswrapper[29458]: I0308 22:13:59.956216 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-f988cd549-68kmh" Mar 08 22:14:00.083059 master-0 kubenswrapper[29458]: I0308 22:14:00.082908 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=8.082877681 podStartE2EDuration="8.082877681s" podCreationTimestamp="2026-03-08 22:13:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:14:00.080042344 +0000 UTC m=+9.368099936" watchObservedRunningTime="2026-03-08 22:14:00.082877681 +0000 UTC m=+9.370935263" Mar 08 22:14:00.347132 master-0 kubenswrapper[29458]: I0308 22:14:00.347013 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-75c47c75-b4x7d"] Mar 08 22:14:00.358687 master-0 kubenswrapper[29458]: I0308 22:14:00.358644 29458 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 08 22:14:00.425462 master-0 kubenswrapper[29458]: I0308 22:14:00.424694 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8ctpt" Mar 08 22:14:00.450938 master-0 kubenswrapper[29458]: I0308 22:14:00.450814 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=8.450778758 podStartE2EDuration="8.450778758s" podCreationTimestamp="2026-03-08 22:13:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:14:00.431467991 +0000 UTC m=+9.719525623" watchObservedRunningTime="2026-03-08 22:14:00.450778758 +0000 UTC m=+9.738836390" Mar 08 22:14:00.483698 master-0 kubenswrapper[29458]: I0308 22:14:00.482638 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8ctpt" Mar 08 22:14:00.573361 master-0 kubenswrapper[29458]: I0308 22:14:00.573306 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:14:00.573588 master-0 kubenswrapper[29458]: I0308 22:14:00.573524 29458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 22:14:00.573588 master-0 kubenswrapper[29458]: I0308 22:14:00.573540 29458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 22:14:00.612901 master-0 kubenswrapper[29458]: I0308 22:14:00.612778 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:14:00.622369 master-0 kubenswrapper[29458]: I0308 22:14:00.621302 29458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 22:14:00.622596 master-0 kubenswrapper[29458]: I0308 22:14:00.622540 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" event={"ID":"2ef568b8-cb7e-47ac-ab67-dc3058c2e374","Type":"ContainerStarted","Data":"b52d2aa94cdc6ba996dbed1331a5cbf88e8befe42d5cd859c848fd1cea1bc343"} Mar 08 22:14:00.623068 master-0 kubenswrapper[29458]: I0308 22:14:00.623038 29458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 22:14:00.674298 master-0 kubenswrapper[29458]: I0308 22:14:00.674244 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8ctpt" Mar 08 22:14:00.817494 master-0 kubenswrapper[29458]: I0308 22:14:00.817389 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access\") pod \"installer-2-master-0\" (UID: \"1d188983-1f19-4c8e-b604-034bd6308139\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 22:14:00.817716 master-0 kubenswrapper[29458]: E0308 22:14:00.817655 29458 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 08 22:14:00.817716 master-0 kubenswrapper[29458]: E0308 22:14:00.817698 29458 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-2-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 08 22:14:00.817812 master-0 kubenswrapper[29458]: E0308 22:14:00.817768 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access podName:1d188983-1f19-4c8e-b604-034bd6308139 nodeName:}" failed. No retries permitted until 2026-03-08 22:14:04.817741638 +0000 UTC m=+14.105799230 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access") pod "installer-2-master-0" (UID: "1d188983-1f19-4c8e-b604-034bd6308139") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 08 22:14:00.835343 master-0 kubenswrapper[29458]: I0308 22:14:00.835245 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=3.835214084 podStartE2EDuration="3.835214084s" podCreationTimestamp="2026-03-08 22:13:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:14:00.833396344 +0000 UTC m=+10.121453936" watchObservedRunningTime="2026-03-08 22:14:00.835214084 +0000 UTC m=+10.123271676" Mar 08 22:14:01.411064 master-0 kubenswrapper[29458]: I0308 22:14:01.411004 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-6f9445b8fd-w44n6" Mar 08 22:14:01.423338 master-0 kubenswrapper[29458]: I0308 22:14:01.423285 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-6bf768964c-srxfg" Mar 08 22:14:01.949103 master-0 kubenswrapper[29458]: I0308 22:14:01.948814 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:14:01.949103 master-0 kubenswrapper[29458]: I0308 22:14:01.948943 29458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 22:14:01.959102 master-0 kubenswrapper[29458]: I0308 22:14:01.955572 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-79f8cd6fdd-4fsdl" Mar 08 22:14:02.254497 master-0 kubenswrapper[29458]: I0308 22:14:02.249196 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 08 22:14:02.401462 master-0 kubenswrapper[29458]: I0308 22:14:02.401364 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8w7wm" Mar 08 22:14:02.477097 master-0 kubenswrapper[29458]: I0308 22:14:02.476380 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8w7wm" Mar 08 22:14:02.707443 master-0 kubenswrapper[29458]: I0308 22:14:02.707382 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 22:14:02.707687 master-0 kubenswrapper[29458]: I0308 22:14:02.707515 29458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 22:14:02.714547 master-0 kubenswrapper[29458]: I0308 22:14:02.714506 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-djlff" Mar 08 22:14:02.723184 master-0 kubenswrapper[29458]: I0308 22:14:02.723143 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8w7wm" Mar 08 22:14:02.929098 master-0 kubenswrapper[29458]: I0308 22:14:02.924559 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" Mar 08 22:14:02.929098 master-0 kubenswrapper[29458]: 
I0308 22:14:02.924707 29458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 22:14:02.941098 master-0 kubenswrapper[29458]: I0308 22:14:02.938623 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-6q5t2" Mar 08 22:14:03.003994 master-0 kubenswrapper[29458]: I0308 22:14:03.003286 29458 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 08 22:14:03.003994 master-0 kubenswrapper[29458]: I0308 22:14:03.003632 29458 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="acbb43bf2cf27ed60d1f635fd6638ac7" containerName="startup-monitor" containerID="cri-o://fdeb5fad707aa2fd286d6ec0c3bfa8d45fd4f299a386859f868108e7ec60d695" gracePeriod=5 Mar 08 22:14:03.004724 master-0 kubenswrapper[29458]: I0308 22:14:03.004675 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 22:14:03.004784 master-0 kubenswrapper[29458]: I0308 22:14:03.004768 29458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 22:14:03.005710 master-0 kubenswrapper[29458]: I0308 22:14:03.005669 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mg95b" Mar 08 22:14:03.012513 master-0 kubenswrapper[29458]: I0308 22:14:03.012483 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" Mar 08 22:14:03.122097 master-0 kubenswrapper[29458]: I0308 22:14:03.118727 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" Mar 08 22:14:03.122097 master-0 kubenswrapper[29458]: I0308 22:14:03.118858 29458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 22:14:03.134090 master-0 kubenswrapper[29458]: I0308 22:14:03.131729 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" Mar 08 22:14:03.141808 master-0 kubenswrapper[29458]: I0308 22:14:03.141590 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mg95b" Mar 08 22:14:03.691918 master-0 kubenswrapper[29458]: I0308 22:14:03.691846 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mg95b" Mar 08 22:14:03.767681 master-0 kubenswrapper[29458]: I0308 22:14:03.767611 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mg95b" Mar 08 22:14:03.851112 master-0 kubenswrapper[29458]: I0308 22:14:03.847619 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-47cmq" Mar 08 22:14:03.935410 master-0 kubenswrapper[29458]: I0308 22:14:03.935328 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-47cmq" Mar 08 22:14:04.005750 master-0 kubenswrapper[29458]: I0308 22:14:04.005610 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 22:14:04.006304 master-0 kubenswrapper[29458]: I0308 22:14:04.005849 29458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 22:14:04.018045 master-0 kubenswrapper[29458]: I0308 22:14:04.015089 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-5ljhh" Mar 08 22:14:04.191141 master-0 kubenswrapper[29458]: I0308 22:14:04.191089 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 22:14:04.191581 master-0 kubenswrapper[29458]: I0308 22:14:04.191564 29458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 22:14:04.198485 master-0 kubenswrapper[29458]: I0308 22:14:04.198229 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-qv4bv" Mar 08 22:14:04.369198 master-0 kubenswrapper[29458]: I0308 22:14:04.369141 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" Mar 08 22:14:04.369488 master-0 kubenswrapper[29458]: I0308 22:14:04.369291 29458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 22:14:04.374455 master-0 kubenswrapper[29458]: I0308 22:14:04.374424 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-xqh7x" Mar 08 22:14:04.416056 master-0 kubenswrapper[29458]: I0308 22:14:04.416006 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 22:14:04.416591 master-0 kubenswrapper[29458]: I0308 22:14:04.416575 29458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 22:14:04.421115 master-0 kubenswrapper[29458]: I0308 22:14:04.420672 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-nk294" Mar 08 22:14:04.612584 master-0 kubenswrapper[29458]: I0308 22:14:04.612521 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-47cmq" Mar 08 22:14:04.686465 master-0 kubenswrapper[29458]: I0308 22:14:04.683409 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-47cmq" Mar 08 22:14:04.872431 master-0 kubenswrapper[29458]: I0308 22:14:04.872352 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access\") pod \"installer-2-master-0\" (UID: \"1d188983-1f19-4c8e-b604-034bd6308139\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 22:14:04.873068 master-0 kubenswrapper[29458]: E0308 22:14:04.873048 29458 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 08 22:14:04.873181 master-0 kubenswrapper[29458]: E0308 22:14:04.873161 29458 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-2-master-0: object 
"openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 08 22:14:04.873913 master-0 kubenswrapper[29458]: E0308 22:14:04.873349 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access podName:1d188983-1f19-4c8e-b604-034bd6308139 nodeName:}" failed. No retries permitted until 2026-03-08 22:14:12.87332807 +0000 UTC m=+22.161385662 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access") pod "installer-2-master-0" (UID: "1d188983-1f19-4c8e-b604-034bd6308139") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 08 22:14:05.378887 master-0 kubenswrapper[29458]: I0308 22:14:05.378818 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 22:14:05.379467 master-0 kubenswrapper[29458]: I0308 22:14:05.378978 29458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 22:14:05.385702 master-0 kubenswrapper[29458]: I0308 22:14:05.385644 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-x5zxr" Mar 08 22:14:06.681196 master-0 kubenswrapper[29458]: I0308 22:14:06.681124 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" event={"ID":"2ef568b8-cb7e-47ac-ab67-dc3058c2e374","Type":"ContainerStarted","Data":"77dcc71cefdd2f6800116dae9e3186f13736e8f9c3747e3cefc96b68c0027b3f"} Mar 08 22:14:06.704752 master-0 kubenswrapper[29458]: I0308 22:14:06.704635 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" podStartSLOduration=4.780990507 podStartE2EDuration="9.704613265s" podCreationTimestamp="2026-03-08 22:13:57 +0000 UTC" firstStartedPulling="2026-03-08 22:14:00.358537954 +0000 UTC m=+9.646595546" lastFinishedPulling="2026-03-08 22:14:05.282160712 +0000 UTC m=+14.570218304" observedRunningTime="2026-03-08 22:14:06.701706456 +0000 UTC m=+15.989764058" watchObservedRunningTime="2026-03-08 22:14:06.704613265 +0000 UTC m=+15.992670857" Mar 08 22:14:07.689143 master-0 kubenswrapper[29458]: I0308 22:14:07.689049 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:14:07.696471 master-0 kubenswrapper[29458]: I0308 22:14:07.696411 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:14:08.617168 master-0 kubenswrapper[29458]: I0308 22:14:08.617096 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_acbb43bf2cf27ed60d1f635fd6638ac7/startup-monitor/0.log" Mar 08 22:14:08.617406 master-0 kubenswrapper[29458]: I0308 22:14:08.617228 29458 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:14:08.697625 master-0 kubenswrapper[29458]: I0308 22:14:08.697557 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_acbb43bf2cf27ed60d1f635fd6638ac7/startup-monitor/0.log" Mar 08 22:14:08.698182 master-0 kubenswrapper[29458]: I0308 22:14:08.697664 29458 generic.go:334] "Generic (PLEG): container finished" podID="acbb43bf2cf27ed60d1f635fd6638ac7" containerID="fdeb5fad707aa2fd286d6ec0c3bfa8d45fd4f299a386859f868108e7ec60d695" exitCode=137 Mar 08 22:14:08.698182 master-0 kubenswrapper[29458]: I0308 22:14:08.697818 29458 scope.go:117] "RemoveContainer" containerID="fdeb5fad707aa2fd286d6ec0c3bfa8d45fd4f299a386859f868108e7ec60d695" Mar 08 22:14:08.698182 master-0 kubenswrapper[29458]: I0308 22:14:08.697813 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:14:08.718043 master-0 kubenswrapper[29458]: I0308 22:14:08.717989 29458 scope.go:117] "RemoveContainer" containerID="fdeb5fad707aa2fd286d6ec0c3bfa8d45fd4f299a386859f868108e7ec60d695" Mar 08 22:14:08.718759 master-0 kubenswrapper[29458]: E0308 22:14:08.718674 29458 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdeb5fad707aa2fd286d6ec0c3bfa8d45fd4f299a386859f868108e7ec60d695\": container with ID starting with fdeb5fad707aa2fd286d6ec0c3bfa8d45fd4f299a386859f868108e7ec60d695 not found: ID does not exist" containerID="fdeb5fad707aa2fd286d6ec0c3bfa8d45fd4f299a386859f868108e7ec60d695" Mar 08 22:14:08.718906 master-0 kubenswrapper[29458]: I0308 22:14:08.718784 29458 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdeb5fad707aa2fd286d6ec0c3bfa8d45fd4f299a386859f868108e7ec60d695"} err="failed to get container status \"fdeb5fad707aa2fd286d6ec0c3bfa8d45fd4f299a386859f868108e7ec60d695\": rpc error: code = NotFound desc = could not find container \"fdeb5fad707aa2fd286d6ec0c3bfa8d45fd4f299a386859f868108e7ec60d695\": container with ID starting with fdeb5fad707aa2fd286d6ec0c3bfa8d45fd4f299a386859f868108e7ec60d695 not found: ID does not exist" Mar 08 22:14:08.752591 master-0 kubenswrapper[29458]: I0308 22:14:08.752509 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-resource-dir\") pod \"acbb43bf2cf27ed60d1f635fd6638ac7\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " Mar 08 22:14:08.752591 master-0 kubenswrapper[29458]: I0308 22:14:08.752584 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-manifests\") pod \"acbb43bf2cf27ed60d1f635fd6638ac7\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " Mar 08 22:14:08.752914 master-0 kubenswrapper[29458]: I0308 22:14:08.752742 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "acbb43bf2cf27ed60d1f635fd6638ac7" (UID: "acbb43bf2cf27ed60d1f635fd6638ac7"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:14:08.752914 master-0 kubenswrapper[29458]: I0308 22:14:08.752791 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-var-log\") pod \"acbb43bf2cf27ed60d1f635fd6638ac7\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " Mar 08 22:14:08.752914 master-0 kubenswrapper[29458]: I0308 22:14:08.752851 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-pod-resource-dir\") pod \"acbb43bf2cf27ed60d1f635fd6638ac7\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " Mar 08 22:14:08.752914 master-0 kubenswrapper[29458]: I0308 22:14:08.752883 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-var-lock\") pod \"acbb43bf2cf27ed60d1f635fd6638ac7\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " Mar 08 22:14:08.752914 master-0 kubenswrapper[29458]: I0308 22:14:08.752882 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-manifests" (OuterVolumeSpecName: "manifests") pod "acbb43bf2cf27ed60d1f635fd6638ac7" (UID: "acbb43bf2cf27ed60d1f635fd6638ac7"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:14:08.753171 master-0 kubenswrapper[29458]: I0308 22:14:08.752957 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-var-log" (OuterVolumeSpecName: "var-log") pod "acbb43bf2cf27ed60d1f635fd6638ac7" (UID: "acbb43bf2cf27ed60d1f635fd6638ac7"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:14:08.753171 master-0 kubenswrapper[29458]: I0308 22:14:08.753128 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-var-lock" (OuterVolumeSpecName: "var-lock") pod "acbb43bf2cf27ed60d1f635fd6638ac7" (UID: "acbb43bf2cf27ed60d1f635fd6638ac7"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:14:08.753567 master-0 kubenswrapper[29458]: I0308 22:14:08.753499 29458 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 22:14:08.753567 master-0 kubenswrapper[29458]: I0308 22:14:08.753544 29458 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-manifests\") on node \"master-0\" DevicePath \"\"" Mar 08 22:14:08.753567 master-0 kubenswrapper[29458]: I0308 22:14:08.753567 29458 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-var-log\") on node \"master-0\" DevicePath \"\"" Mar 08 22:14:08.753709 master-0 kubenswrapper[29458]: I0308 22:14:08.753585 29458 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 08 22:14:08.762324 master-0 kubenswrapper[29458]: I0308 22:14:08.762231 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "acbb43bf2cf27ed60d1f635fd6638ac7" (UID: "acbb43bf2cf27ed60d1f635fd6638ac7"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:14:08.856463 master-0 kubenswrapper[29458]: I0308 22:14:08.856376 29458 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 22:14:08.982744 master-0 kubenswrapper[29458]: I0308 22:14:08.982664 29458 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acbb43bf2cf27ed60d1f635fd6638ac7" path="/var/lib/kubelet/pods/acbb43bf2cf27ed60d1f635fd6638ac7/volumes" Mar 08 22:14:08.983052 master-0 kubenswrapper[29458]: I0308 22:14:08.982991 29458 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Mar 08 22:14:09.019000 master-0 kubenswrapper[29458]: I0308 22:14:09.018523 29458 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 08 22:14:09.019000 master-0 kubenswrapper[29458]: I0308 22:14:09.018570 29458 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="444bf4e8-e74d-41db-8295-d9b99edec732" Mar 08 22:14:09.019469 master-0 kubenswrapper[29458]: I0308 22:14:09.019356 29458 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 08 22:14:09.019469 master-0 kubenswrapper[29458]: I0308 22:14:09.019413 29458 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="444bf4e8-e74d-41db-8295-d9b99edec732" Mar 08 22:14:09.682248 master-0 kubenswrapper[29458]: I0308 22:14:09.682173 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:14:10.774931 master-0 
kubenswrapper[29458]: I0308 22:14:10.774858 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-4csw2"] Mar 08 22:14:10.775672 master-0 kubenswrapper[29458]: E0308 22:14:10.775186 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acbb43bf2cf27ed60d1f635fd6638ac7" containerName="startup-monitor" Mar 08 22:14:10.775672 master-0 kubenswrapper[29458]: I0308 22:14:10.775201 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="acbb43bf2cf27ed60d1f635fd6638ac7" containerName="startup-monitor" Mar 08 22:14:10.775672 master-0 kubenswrapper[29458]: I0308 22:14:10.775346 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="acbb43bf2cf27ed60d1f635fd6638ac7" containerName="startup-monitor" Mar 08 22:14:10.775918 master-0 kubenswrapper[29458]: I0308 22:14:10.775895 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-4csw2" Mar 08 22:14:10.779673 master-0 kubenswrapper[29458]: I0308 22:14:10.779626 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 08 22:14:10.779914 master-0 kubenswrapper[29458]: I0308 22:14:10.779888 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4ct7k" Mar 08 22:14:10.888662 master-0 kubenswrapper[29458]: I0308 22:14:10.888518 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76ctd\" (UniqueName: \"kubernetes.io/projected/60f26e18-ce24-4aa8-b33f-fad5a01e997e-kube-api-access-76ctd\") pod \"node-ca-4csw2\" (UID: \"60f26e18-ce24-4aa8-b33f-fad5a01e997e\") " pod="openshift-image-registry/node-ca-4csw2" Mar 08 22:14:10.888968 master-0 kubenswrapper[29458]: I0308 22:14:10.888702 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/60f26e18-ce24-4aa8-b33f-fad5a01e997e-serviceca\") pod \"node-ca-4csw2\" (UID: \"60f26e18-ce24-4aa8-b33f-fad5a01e997e\") " pod="openshift-image-registry/node-ca-4csw2" Mar 08 22:14:10.888968 master-0 kubenswrapper[29458]: I0308 22:14:10.888828 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/60f26e18-ce24-4aa8-b33f-fad5a01e997e-host\") pod \"node-ca-4csw2\" (UID: \"60f26e18-ce24-4aa8-b33f-fad5a01e997e\") " pod="openshift-image-registry/node-ca-4csw2" Mar 08 22:14:10.990889 master-0 kubenswrapper[29458]: I0308 22:14:10.990791 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76ctd\" (UniqueName: \"kubernetes.io/projected/60f26e18-ce24-4aa8-b33f-fad5a01e997e-kube-api-access-76ctd\") pod \"node-ca-4csw2\" (UID: \"60f26e18-ce24-4aa8-b33f-fad5a01e997e\") " pod="openshift-image-registry/node-ca-4csw2" Mar 08 22:14:10.991306 master-0 kubenswrapper[29458]: I0308 22:14:10.990916 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/60f26e18-ce24-4aa8-b33f-fad5a01e997e-serviceca\") pod \"node-ca-4csw2\" (UID: \"60f26e18-ce24-4aa8-b33f-fad5a01e997e\") " pod="openshift-image-registry/node-ca-4csw2" Mar 08 22:14:10.991306 master-0 kubenswrapper[29458]: I0308 22:14:10.991089 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/60f26e18-ce24-4aa8-b33f-fad5a01e997e-host\") pod \"node-ca-4csw2\" (UID: \"60f26e18-ce24-4aa8-b33f-fad5a01e997e\") " pod="openshift-image-registry/node-ca-4csw2" Mar 08 22:14:10.991306 master-0 kubenswrapper[29458]: I0308 22:14:10.991241 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/60f26e18-ce24-4aa8-b33f-fad5a01e997e-host\") pod \"node-ca-4csw2\" (UID: \"60f26e18-ce24-4aa8-b33f-fad5a01e997e\") " pod="openshift-image-registry/node-ca-4csw2" Mar 08 22:14:10.991695 master-0 kubenswrapper[29458]: I0308 22:14:10.991651 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/60f26e18-ce24-4aa8-b33f-fad5a01e997e-serviceca\") pod \"node-ca-4csw2\" (UID: \"60f26e18-ce24-4aa8-b33f-fad5a01e997e\") " pod="openshift-image-registry/node-ca-4csw2" Mar 08 22:14:11.013507 master-0 kubenswrapper[29458]: I0308 22:14:11.013438 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76ctd\" (UniqueName: \"kubernetes.io/projected/60f26e18-ce24-4aa8-b33f-fad5a01e997e-kube-api-access-76ctd\") pod \"node-ca-4csw2\" (UID: \"60f26e18-ce24-4aa8-b33f-fad5a01e997e\") " pod="openshift-image-registry/node-ca-4csw2" Mar 08 22:14:11.096960 master-0 kubenswrapper[29458]: I0308 22:14:11.096886 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-4csw2" Mar 08 22:14:11.739125 master-0 kubenswrapper[29458]: I0308 22:14:11.739032 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4csw2" event={"ID":"60f26e18-ce24-4aa8-b33f-fad5a01e997e","Type":"ContainerStarted","Data":"39ee87fca30722c19d3efe9af07e862cebe991754b947b254cca83f8f5098351"} Mar 08 22:14:12.927991 master-0 kubenswrapper[29458]: I0308 22:14:12.927120 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access\") pod \"installer-2-master-0\" (UID: \"1d188983-1f19-4c8e-b604-034bd6308139\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 22:14:12.927991 master-0 kubenswrapper[29458]: E0308 22:14:12.927374 29458 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 08 22:14:12.927991 master-0 kubenswrapper[29458]: E0308 22:14:12.927428 29458 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-2-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 08 22:14:12.927991 master-0 kubenswrapper[29458]: E0308 22:14:12.927521 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access podName:1d188983-1f19-4c8e-b604-034bd6308139 nodeName:}" failed. No retries permitted until 2026-03-08 22:14:28.92749045 +0000 UTC m=+38.215548042 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access") pod "installer-2-master-0" (UID: "1d188983-1f19-4c8e-b604-034bd6308139") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 08 22:14:13.666480 master-0 kubenswrapper[29458]: I0308 22:14:13.666419 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:14:13.671823 master-0 kubenswrapper[29458]: I0308 22:14:13.671779 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:14:13.757110 master-0 kubenswrapper[29458]: I0308 22:14:13.757020 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" Mar 08 22:14:14.770629 master-0 kubenswrapper[29458]: I0308 22:14:14.770537 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4csw2" event={"ID":"60f26e18-ce24-4aa8-b33f-fad5a01e997e","Type":"ContainerStarted","Data":"640144f573363bc5306eb9004319fc51f8f83deeeb4c30102b8a54dbb4373235"} Mar 08 22:14:14.798654 master-0 kubenswrapper[29458]: I0308 22:14:14.798559 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-4csw2" podStartSLOduration=2.410308816 podStartE2EDuration="4.798345084s" podCreationTimestamp="2026-03-08 22:14:10 +0000 UTC" firstStartedPulling="2026-03-08 22:14:11.126832529 +0000 UTC m=+20.414890151" lastFinishedPulling="2026-03-08 22:14:13.514868827 +0000 UTC m=+22.802926419" observedRunningTime="2026-03-08 22:14:14.795595259 +0000 UTC m=+24.083652851" watchObservedRunningTime="2026-03-08 22:14:14.798345084 +0000 UTC m=+24.086402666" Mar 08 22:14:20.876782 master-0 kubenswrapper[29458]: I0308 22:14:20.876697 29458 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-75c47c75-b4x7d"] Mar 08 22:14:24.602292 master-0 kubenswrapper[29458]: I0308 22:14:24.602197 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:14:24.602856 master-0 kubenswrapper[29458]: I0308 22:14:24.602567 29458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 22:14:24.639398 master-0 kubenswrapper[29458]: I0308 22:14:24.639329 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g4d2r" Mar 08 22:14:28.105678 master-0 kubenswrapper[29458]: I0308 22:14:28.105613 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-dd79cbb67-m6lt9"] Mar 08 22:14:28.106741 master-0 kubenswrapper[29458]: I0308 22:14:28.106707 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-dd79cbb67-m6lt9" Mar 08 22:14:28.109229 master-0 kubenswrapper[29458]: I0308 22:14:28.109160 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-js89c" Mar 08 22:14:28.109366 master-0 kubenswrapper[29458]: I0308 22:14:28.109202 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 08 22:14:28.125100 master-0 kubenswrapper[29458]: I0308 22:14:28.125021 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-dd79cbb67-m6lt9"] Mar 08 22:14:28.185329 master-0 kubenswrapper[29458]: I0308 22:14:28.185202 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/28f06482-e8d8-4bc5-b4fd-c0e35f0a0136-monitoring-plugin-cert\") pod \"monitoring-plugin-dd79cbb67-m6lt9\" (UID: \"28f06482-e8d8-4bc5-b4fd-c0e35f0a0136\") " pod="openshift-monitoring/monitoring-plugin-dd79cbb67-m6lt9" Mar 08 22:14:28.286739 master-0 kubenswrapper[29458]: I0308 22:14:28.286674 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/28f06482-e8d8-4bc5-b4fd-c0e35f0a0136-monitoring-plugin-cert\") pod \"monitoring-plugin-dd79cbb67-m6lt9\" (UID: \"28f06482-e8d8-4bc5-b4fd-c0e35f0a0136\") " pod="openshift-monitoring/monitoring-plugin-dd79cbb67-m6lt9" Mar 08 22:14:28.292907 master-0 kubenswrapper[29458]: I0308 22:14:28.292834 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/28f06482-e8d8-4bc5-b4fd-c0e35f0a0136-monitoring-plugin-cert\") pod \"monitoring-plugin-dd79cbb67-m6lt9\" (UID: \"28f06482-e8d8-4bc5-b4fd-c0e35f0a0136\") " pod="openshift-monitoring/monitoring-plugin-dd79cbb67-m6lt9" Mar 08 22:14:28.428429 master-0 kubenswrapper[29458]: I0308 22:14:28.428302 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-dd79cbb67-m6lt9" Mar 08 22:14:28.900488 master-0 kubenswrapper[29458]: I0308 22:14:28.900369 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-dd79cbb67-m6lt9"] Mar 08 22:14:28.907969 master-0 kubenswrapper[29458]: W0308 22:14:28.907903 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28f06482_e8d8_4bc5_b4fd_c0e35f0a0136.slice/crio-ac791661cc428dc3ac8ffbee6d06e3801bfed46c02d2d9ac4b55f5f9e5a32617 WatchSource:0}: Error finding container ac791661cc428dc3ac8ffbee6d06e3801bfed46c02d2d9ac4b55f5f9e5a32617: Status 404 returned error can't find the container with id ac791661cc428dc3ac8ffbee6d06e3801bfed46c02d2d9ac4b55f5f9e5a32617 Mar 08 22:14:28.996599 master-0 kubenswrapper[29458]: I0308 22:14:28.996519 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access\") pod \"installer-2-master-0\" (UID: \"1d188983-1f19-4c8e-b604-034bd6308139\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 22:14:28.996835 master-0 kubenswrapper[29458]: E0308 22:14:28.996790 29458 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 08 22:14:28.996835 master-0 kubenswrapper[29458]: E0308 22:14:28.996813 29458 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-2-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 08 22:14:28.996898 master-0 kubenswrapper[29458]: E0308 22:14:28.996866 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access podName:1d188983-1f19-4c8e-b604-034bd6308139 nodeName:}" failed. No retries permitted until 2026-03-08 22:15:00.996848678 +0000 UTC m=+70.284906270 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access") pod "installer-2-master-0" (UID: "1d188983-1f19-4c8e-b604-034bd6308139") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 08 22:14:29.886980 master-0 kubenswrapper[29458]: I0308 22:14:29.886816 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-dd79cbb67-m6lt9" event={"ID":"28f06482-e8d8-4bc5-b4fd-c0e35f0a0136","Type":"ContainerStarted","Data":"ac791661cc428dc3ac8ffbee6d06e3801bfed46c02d2d9ac4b55f5f9e5a32617"}
Mar 08 22:14:30.894805 master-0 kubenswrapper[29458]: I0308 22:14:30.894734 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-dd79cbb67-m6lt9" event={"ID":"28f06482-e8d8-4bc5-b4fd-c0e35f0a0136","Type":"ContainerStarted","Data":"1ef530b168e65d57f008c31148547b958183aa2a720d5d7d0356d2151ab34c7c"}
Mar 08 22:14:30.895400 master-0 kubenswrapper[29458]: I0308 22:14:30.895031 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-dd79cbb67-m6lt9"
Mar 08 22:14:30.903267 master-0 kubenswrapper[29458]: I0308 22:14:30.903219 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-dd79cbb67-m6lt9"
Mar 08 22:14:30.919166 master-0 kubenswrapper[29458]: I0308 22:14:30.919021 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-dd79cbb67-m6lt9" podStartSLOduration=1.340754351 podStartE2EDuration="2.918990211s" podCreationTimestamp="2026-03-08 22:14:28 +0000 UTC" firstStartedPulling="2026-03-08 22:14:28.909743535 +0000 UTC m=+38.197801127" lastFinishedPulling="2026-03-08 22:14:30.487979395 +0000 UTC m=+39.776036987" observedRunningTime="2026-03-08 22:14:30.914834517 +0000 UTC m=+40.202892129" watchObservedRunningTime="2026-03-08 22:14:30.918990211 +0000 UTC m=+40.207047843"
Mar 08 22:14:41.019127 master-0 kubenswrapper[29458]: I0308 22:14:41.017822 29458 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f7df5f5b-txsrq"]
Mar 08 22:14:41.019127 master-0 kubenswrapper[29458]: I0308 22:14:41.018569 29458 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" podUID="2395900a-ff6b-46ff-92c6-a8a1b5675b67" containerName="controller-manager" containerID="cri-o://85005f16c583a4ab5c5a867ba6783fd03ab5c53c3034265ad8c5c484c0e889f0" gracePeriod=30
Mar 08 22:14:41.027783 master-0 kubenswrapper[29458]: I0308 22:14:41.026063 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Mar 08 22:14:41.027783 master-0 kubenswrapper[29458]: I0308 22:14:41.027327 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 08 22:14:41.031623 master-0 kubenswrapper[29458]: I0308 22:14:41.031553 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-jszd4"
Mar 08 22:14:41.035439 master-0 kubenswrapper[29458]: I0308 22:14:41.034065 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 08 22:14:41.051292 master-0 kubenswrapper[29458]: I0308 22:14:41.049674 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Mar 08 22:14:41.070111 master-0 kubenswrapper[29458]: I0308 22:14:41.069788 29458 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k"]
Mar 08 22:14:41.070268 master-0 kubenswrapper[29458]: I0308 22:14:41.070157 29458 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" podUID="da51940a-a38f-4baf-9c14-b2f1f46b5aed" containerName="route-controller-manager" containerID="cri-o://8bf16092f9b3870167e38461525b39cb506803114455041ce9e1a82a40465b31" gracePeriod=30
Mar 08 22:14:41.106844 master-0 kubenswrapper[29458]: I0308 22:14:41.105797 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 08 22:14:41.106844 master-0 kubenswrapper[29458]: I0308 22:14:41.105893 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19-kube-api-access\") pod \"installer-3-master-0\" (UID: \"c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 08 22:14:41.106844 master-0 kubenswrapper[29458]: I0308 22:14:41.105948 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19-var-lock\") pod \"installer-3-master-0\" (UID: \"c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 08 22:14:41.207350 master-0 kubenswrapper[29458]: I0308 22:14:41.207148 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 08 22:14:41.207350 master-0 kubenswrapper[29458]: I0308 22:14:41.207240 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19-kube-api-access\") pod \"installer-3-master-0\" (UID: \"c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 08 22:14:41.207350 master-0 kubenswrapper[29458]: I0308 22:14:41.207285 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19-var-lock\") pod \"installer-3-master-0\" (UID: \"c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 08 22:14:41.207479 master-0 kubenswrapper[29458]: I0308 22:14:41.207372 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19-var-lock\") pod \"installer-3-master-0\" (UID: \"c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 08 22:14:41.207479 master-0 kubenswrapper[29458]: I0308 22:14:41.207423 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 08 22:14:41.238722 master-0 kubenswrapper[29458]: I0308 22:14:41.237293 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19-kube-api-access\") pod \"installer-3-master-0\" (UID: \"c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 08 22:14:41.355821 master-0 kubenswrapper[29458]: I0308 22:14:41.355745 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 08 22:14:41.828980 master-0 kubenswrapper[29458]: I0308 22:14:41.828901 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k"
Mar 08 22:14:41.850115 master-0 kubenswrapper[29458]: I0308 22:14:41.849575 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Mar 08 22:14:41.925744 master-0 kubenswrapper[29458]: I0308 22:14:41.925665 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da51940a-a38f-4baf-9c14-b2f1f46b5aed-serving-cert\") pod \"da51940a-a38f-4baf-9c14-b2f1f46b5aed\" (UID: \"da51940a-a38f-4baf-9c14-b2f1f46b5aed\") "
Mar 08 22:14:41.926113 master-0 kubenswrapper[29458]: I0308 22:14:41.925829 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clxsk\" (UniqueName: \"kubernetes.io/projected/da51940a-a38f-4baf-9c14-b2f1f46b5aed-kube-api-access-clxsk\") pod \"da51940a-a38f-4baf-9c14-b2f1f46b5aed\" (UID: \"da51940a-a38f-4baf-9c14-b2f1f46b5aed\") "
Mar 08 22:14:41.926113 master-0 kubenswrapper[29458]: I0308 22:14:41.925860 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da51940a-a38f-4baf-9c14-b2f1f46b5aed-client-ca\") pod \"da51940a-a38f-4baf-9c14-b2f1f46b5aed\" (UID: \"da51940a-a38f-4baf-9c14-b2f1f46b5aed\") "
Mar 08 22:14:41.926113 master-0 kubenswrapper[29458]: I0308 22:14:41.925902 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da51940a-a38f-4baf-9c14-b2f1f46b5aed-config\") pod \"da51940a-a38f-4baf-9c14-b2f1f46b5aed\" (UID: \"da51940a-a38f-4baf-9c14-b2f1f46b5aed\") "
Mar 08 22:14:41.926738 master-0 kubenswrapper[29458]: I0308 22:14:41.926695 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da51940a-a38f-4baf-9c14-b2f1f46b5aed-config" (OuterVolumeSpecName: "config") pod "da51940a-a38f-4baf-9c14-b2f1f46b5aed" (UID: "da51940a-a38f-4baf-9c14-b2f1f46b5aed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 22:14:41.927548 master-0 kubenswrapper[29458]: I0308 22:14:41.927483 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da51940a-a38f-4baf-9c14-b2f1f46b5aed-client-ca" (OuterVolumeSpecName: "client-ca") pod "da51940a-a38f-4baf-9c14-b2f1f46b5aed" (UID: "da51940a-a38f-4baf-9c14-b2f1f46b5aed"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 22:14:41.929686 master-0 kubenswrapper[29458]: I0308 22:14:41.929641 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da51940a-a38f-4baf-9c14-b2f1f46b5aed-kube-api-access-clxsk" (OuterVolumeSpecName: "kube-api-access-clxsk") pod "da51940a-a38f-4baf-9c14-b2f1f46b5aed" (UID: "da51940a-a38f-4baf-9c14-b2f1f46b5aed"). InnerVolumeSpecName "kube-api-access-clxsk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 22:14:41.930712 master-0 kubenswrapper[29458]: I0308 22:14:41.930670 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da51940a-a38f-4baf-9c14-b2f1f46b5aed-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "da51940a-a38f-4baf-9c14-b2f1f46b5aed" (UID: "da51940a-a38f-4baf-9c14-b2f1f46b5aed"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 22:14:41.980906 master-0 kubenswrapper[29458]: I0308 22:14:41.980824 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq"
Mar 08 22:14:42.016649 master-0 kubenswrapper[29458]: I0308 22:14:42.016586 29458 generic.go:334] "Generic (PLEG): container finished" podID="da51940a-a38f-4baf-9c14-b2f1f46b5aed" containerID="8bf16092f9b3870167e38461525b39cb506803114455041ce9e1a82a40465b31" exitCode=0
Mar 08 22:14:42.016957 master-0 kubenswrapper[29458]: I0308 22:14:42.016673 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" event={"ID":"da51940a-a38f-4baf-9c14-b2f1f46b5aed","Type":"ContainerDied","Data":"8bf16092f9b3870167e38461525b39cb506803114455041ce9e1a82a40465b31"}
Mar 08 22:14:42.017114 master-0 kubenswrapper[29458]: I0308 22:14:42.016771 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k"
Mar 08 22:14:42.017184 master-0 kubenswrapper[29458]: I0308 22:14:42.017101 29458 scope.go:117] "RemoveContainer" containerID="8bf16092f9b3870167e38461525b39cb506803114455041ce9e1a82a40465b31"
Mar 08 22:14:42.017344 master-0 kubenswrapper[29458]: I0308 22:14:42.017059 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k" event={"ID":"da51940a-a38f-4baf-9c14-b2f1f46b5aed","Type":"ContainerDied","Data":"49a678c1404278a258bd5f7da531aa1c8094425dc0f885e61d43b5bf65b98923"}
Mar 08 22:14:42.022224 master-0 kubenswrapper[29458]: I0308 22:14:42.022168 29458 generic.go:334] "Generic (PLEG): container finished" podID="2395900a-ff6b-46ff-92c6-a8a1b5675b67" containerID="85005f16c583a4ab5c5a867ba6783fd03ab5c53c3034265ad8c5c484c0e889f0" exitCode=0
Mar 08 22:14:42.022723 master-0 kubenswrapper[29458]: I0308 22:14:42.022247 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" event={"ID":"2395900a-ff6b-46ff-92c6-a8a1b5675b67","Type":"ContainerDied","Data":"85005f16c583a4ab5c5a867ba6783fd03ab5c53c3034265ad8c5c484c0e889f0"}
Mar 08 22:14:42.022723 master-0 kubenswrapper[29458]: I0308 22:14:42.022279 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq" event={"ID":"2395900a-ff6b-46ff-92c6-a8a1b5675b67","Type":"ContainerDied","Data":"556cd17b0dd9a0437b38f51d3f691ed442f4e900ac26991a4d6a0e87a7a93e20"}
Mar 08 22:14:42.022723 master-0 kubenswrapper[29458]: I0308 22:14:42.022324 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f7df5f5b-txsrq"
Mar 08 22:14:42.031786 master-0 kubenswrapper[29458]: I0308 22:14:42.031727 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-config\") pod \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") "
Mar 08 22:14:42.032107 master-0 kubenswrapper[29458]: I0308 22:14:42.032091 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2395900a-ff6b-46ff-92c6-a8a1b5675b67-serving-cert\") pod \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") "
Mar 08 22:14:42.032242 master-0 kubenswrapper[29458]: I0308 22:14:42.032229 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-client-ca\") pod \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") "
Mar 08 22:14:42.032351 master-0 kubenswrapper[29458]: I0308 22:14:42.032338 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v6dc\" (UniqueName: \"kubernetes.io/projected/2395900a-ff6b-46ff-92c6-a8a1b5675b67-kube-api-access-7v6dc\") pod \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") "
Mar 08 22:14:42.032454 master-0 kubenswrapper[29458]: I0308 22:14:42.032439 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-proxy-ca-bundles\") pod \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\" (UID: \"2395900a-ff6b-46ff-92c6-a8a1b5675b67\") "
Mar 08 22:14:42.032658 master-0 kubenswrapper[29458]: I0308 22:14:42.032617 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-config" (OuterVolumeSpecName: "config") pod "2395900a-ff6b-46ff-92c6-a8a1b5675b67" (UID: "2395900a-ff6b-46ff-92c6-a8a1b5675b67"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 22:14:42.033000 master-0 kubenswrapper[29458]: I0308 22:14:42.032454 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19","Type":"ContainerStarted","Data":"0d2280ac16362d670434fdae96e3e2d711c7678f350ddd00219eadd6fdceb1ca"}
Mar 08 22:14:42.033132 master-0 kubenswrapper[29458]: I0308 22:14:42.033114 29458 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da51940a-a38f-4baf-9c14-b2f1f46b5aed-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 08 22:14:42.033243 master-0 kubenswrapper[29458]: I0308 22:14:42.033228 29458 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clxsk\" (UniqueName: \"kubernetes.io/projected/da51940a-a38f-4baf-9c14-b2f1f46b5aed-kube-api-access-clxsk\") on node \"master-0\" DevicePath \"\""
Mar 08 22:14:42.033321 master-0 kubenswrapper[29458]: I0308 22:14:42.033309 29458 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da51940a-a38f-4baf-9c14-b2f1f46b5aed-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 08 22:14:42.033387 master-0 kubenswrapper[29458]: I0308 22:14:42.033376 29458 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da51940a-a38f-4baf-9c14-b2f1f46b5aed-config\") on node \"master-0\" DevicePath \"\""
Mar 08 22:14:42.033450 master-0 kubenswrapper[29458]: I0308 22:14:42.033439 29458 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-config\") on node \"master-0\" DevicePath \"\""
Mar 08 22:14:42.033519 master-0 kubenswrapper[29458]: I0308 22:14:42.033116 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-client-ca" (OuterVolumeSpecName: "client-ca") pod "2395900a-ff6b-46ff-92c6-a8a1b5675b67" (UID: "2395900a-ff6b-46ff-92c6-a8a1b5675b67"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 22:14:42.033699 master-0 kubenswrapper[29458]: I0308 22:14:42.033654 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2395900a-ff6b-46ff-92c6-a8a1b5675b67" (UID: "2395900a-ff6b-46ff-92c6-a8a1b5675b67"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 22:14:42.035720 master-0 kubenswrapper[29458]: I0308 22:14:42.035660 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2395900a-ff6b-46ff-92c6-a8a1b5675b67-kube-api-access-7v6dc" (OuterVolumeSpecName: "kube-api-access-7v6dc") pod "2395900a-ff6b-46ff-92c6-a8a1b5675b67" (UID: "2395900a-ff6b-46ff-92c6-a8a1b5675b67"). InnerVolumeSpecName "kube-api-access-7v6dc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 22:14:42.035849 master-0 kubenswrapper[29458]: I0308 22:14:42.035799 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2395900a-ff6b-46ff-92c6-a8a1b5675b67-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2395900a-ff6b-46ff-92c6-a8a1b5675b67" (UID: "2395900a-ff6b-46ff-92c6-a8a1b5675b67"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 22:14:42.055921 master-0 kubenswrapper[29458]: I0308 22:14:42.055486 29458 scope.go:117] "RemoveContainer" containerID="2d58ac2e5c53847ce4d3e3a5eec0022908a1efa2318d790d98db630d929ded39"
Mar 08 22:14:42.061682 master-0 kubenswrapper[29458]: I0308 22:14:42.061633 29458 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k"]
Mar 08 22:14:42.071900 master-0 kubenswrapper[29458]: I0308 22:14:42.071861 29458 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86888d445f-7f74k"]
Mar 08 22:14:42.078029 master-0 kubenswrapper[29458]: I0308 22:14:42.077990 29458 scope.go:117] "RemoveContainer" containerID="8bf16092f9b3870167e38461525b39cb506803114455041ce9e1a82a40465b31"
Mar 08 22:14:42.078675 master-0 kubenswrapper[29458]: E0308 22:14:42.078611 29458 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bf16092f9b3870167e38461525b39cb506803114455041ce9e1a82a40465b31\": container with ID starting with 8bf16092f9b3870167e38461525b39cb506803114455041ce9e1a82a40465b31 not found: ID does not exist" containerID="8bf16092f9b3870167e38461525b39cb506803114455041ce9e1a82a40465b31"
Mar 08 22:14:42.078742 master-0 kubenswrapper[29458]: I0308 22:14:42.078648 29458 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bf16092f9b3870167e38461525b39cb506803114455041ce9e1a82a40465b31"} err="failed to get container status \"8bf16092f9b3870167e38461525b39cb506803114455041ce9e1a82a40465b31\": rpc error: code = NotFound desc = could not find container \"8bf16092f9b3870167e38461525b39cb506803114455041ce9e1a82a40465b31\": container with ID starting with 8bf16092f9b3870167e38461525b39cb506803114455041ce9e1a82a40465b31 not found: ID does not exist"
Mar 08 22:14:42.078742 master-0 kubenswrapper[29458]: I0308 22:14:42.078690 29458 scope.go:117] "RemoveContainer" containerID="2d58ac2e5c53847ce4d3e3a5eec0022908a1efa2318d790d98db630d929ded39"
Mar 08 22:14:42.079033 master-0 kubenswrapper[29458]: E0308 22:14:42.078984 29458 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d58ac2e5c53847ce4d3e3a5eec0022908a1efa2318d790d98db630d929ded39\": container with ID starting with 2d58ac2e5c53847ce4d3e3a5eec0022908a1efa2318d790d98db630d929ded39 not found: ID does not exist" containerID="2d58ac2e5c53847ce4d3e3a5eec0022908a1efa2318d790d98db630d929ded39"
Mar 08 22:14:42.079033 master-0 kubenswrapper[29458]: I0308 22:14:42.079015 29458 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d58ac2e5c53847ce4d3e3a5eec0022908a1efa2318d790d98db630d929ded39"} err="failed to get container status \"2d58ac2e5c53847ce4d3e3a5eec0022908a1efa2318d790d98db630d929ded39\": rpc error: code = NotFound desc = could not find container \"2d58ac2e5c53847ce4d3e3a5eec0022908a1efa2318d790d98db630d929ded39\": container with ID starting with 2d58ac2e5c53847ce4d3e3a5eec0022908a1efa2318d790d98db630d929ded39 not found: ID does not exist" Mar 08 22:14:42.079033 master-0 kubenswrapper[29458]: I0308 22:14:42.079034 29458 scope.go:117] "RemoveContainer" containerID="85005f16c583a4ab5c5a867ba6783fd03ab5c53c3034265ad8c5c484c0e889f0" Mar 08 22:14:42.116571 master-0 kubenswrapper[29458]: I0308 22:14:42.116527 29458 scope.go:117] "RemoveContainer" containerID="8f9e5a4db5da2d0ea86986733f581cc247c4f65a008f2bc00c1b7330c1022a26" Mar 08 22:14:42.135181 master-0 kubenswrapper[29458]: I0308 22:14:42.135130 29458 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2395900a-ff6b-46ff-92c6-a8a1b5675b67-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 08 22:14:42.135181 master-0 kubenswrapper[29458]: I0308 22:14:42.135158 29458 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 08 22:14:42.135181 master-0 kubenswrapper[29458]: I0308 22:14:42.135169 29458 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v6dc\" (UniqueName: \"kubernetes.io/projected/2395900a-ff6b-46ff-92c6-a8a1b5675b67-kube-api-access-7v6dc\") on node \"master-0\" DevicePath \"\"" Mar 08 22:14:42.135181 master-0 kubenswrapper[29458]: I0308 22:14:42.135178 29458 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2395900a-ff6b-46ff-92c6-a8a1b5675b67-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 08 22:14:42.145126 master-0 kubenswrapper[29458]: I0308 22:14:42.145044 29458 scope.go:117] "RemoveContainer" containerID="85005f16c583a4ab5c5a867ba6783fd03ab5c53c3034265ad8c5c484c0e889f0" Mar 08 22:14:42.145897 master-0 kubenswrapper[29458]: E0308 22:14:42.145846 29458 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85005f16c583a4ab5c5a867ba6783fd03ab5c53c3034265ad8c5c484c0e889f0\": container with ID starting with 85005f16c583a4ab5c5a867ba6783fd03ab5c53c3034265ad8c5c484c0e889f0 not found: ID does not exist" containerID="85005f16c583a4ab5c5a867ba6783fd03ab5c53c3034265ad8c5c484c0e889f0" Mar 08 22:14:42.145897 master-0 kubenswrapper[29458]: I0308 22:14:42.145881 29458 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85005f16c583a4ab5c5a867ba6783fd03ab5c53c3034265ad8c5c484c0e889f0"} err="failed to get container status \"85005f16c583a4ab5c5a867ba6783fd03ab5c53c3034265ad8c5c484c0e889f0\": rpc error: code = NotFound desc = could not find container \"85005f16c583a4ab5c5a867ba6783fd03ab5c53c3034265ad8c5c484c0e889f0\": container with ID starting with 85005f16c583a4ab5c5a867ba6783fd03ab5c53c3034265ad8c5c484c0e889f0 not found: ID does not exist" Mar 08 22:14:42.146087 master-0 kubenswrapper[29458]: I0308 22:14:42.145909 29458 scope.go:117] "RemoveContainer" 
containerID="8f9e5a4db5da2d0ea86986733f581cc247c4f65a008f2bc00c1b7330c1022a26" Mar 08 22:14:42.146357 master-0 kubenswrapper[29458]: E0308 22:14:42.146314 29458 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f9e5a4db5da2d0ea86986733f581cc247c4f65a008f2bc00c1b7330c1022a26\": container with ID starting with 8f9e5a4db5da2d0ea86986733f581cc247c4f65a008f2bc00c1b7330c1022a26 not found: ID does not exist" containerID="8f9e5a4db5da2d0ea86986733f581cc247c4f65a008f2bc00c1b7330c1022a26" Mar 08 22:14:42.146443 master-0 kubenswrapper[29458]: I0308 22:14:42.146348 29458 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f9e5a4db5da2d0ea86986733f581cc247c4f65a008f2bc00c1b7330c1022a26"} err="failed to get container status \"8f9e5a4db5da2d0ea86986733f581cc247c4f65a008f2bc00c1b7330c1022a26\": rpc error: code = NotFound desc = could not find container \"8f9e5a4db5da2d0ea86986733f581cc247c4f65a008f2bc00c1b7330c1022a26\": container with ID starting with 8f9e5a4db5da2d0ea86986733f581cc247c4f65a008f2bc00c1b7330c1022a26 not found: ID does not exist" Mar 08 22:14:42.379841 master-0 kubenswrapper[29458]: I0308 22:14:42.379764 29458 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f7df5f5b-txsrq"] Mar 08 22:14:42.386999 master-0 kubenswrapper[29458]: I0308 22:14:42.386883 29458 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-f7df5f5b-txsrq"] Mar 08 22:14:42.538738 master-0 kubenswrapper[29458]: I0308 22:14:42.538624 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-db5b49478-n22wx"] Mar 08 22:14:42.539423 master-0 kubenswrapper[29458]: E0308 22:14:42.539364 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2395900a-ff6b-46ff-92c6-a8a1b5675b67" containerName="controller-manager" Mar 08 22:14:42.539507 master-0 kubenswrapper[29458]: I0308 22:14:42.539436 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="2395900a-ff6b-46ff-92c6-a8a1b5675b67" containerName="controller-manager" Mar 08 22:14:42.539507 master-0 kubenswrapper[29458]: E0308 22:14:42.539480 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2395900a-ff6b-46ff-92c6-a8a1b5675b67" containerName="controller-manager" Mar 08 22:14:42.539507 master-0 kubenswrapper[29458]: I0308 22:14:42.539501 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="2395900a-ff6b-46ff-92c6-a8a1b5675b67" containerName="controller-manager" Mar 08 22:14:42.539645 master-0 kubenswrapper[29458]: E0308 22:14:42.539572 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da51940a-a38f-4baf-9c14-b2f1f46b5aed" containerName="route-controller-manager" Mar 08 22:14:42.539703 master-0 kubenswrapper[29458]: I0308 22:14:42.539594 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="da51940a-a38f-4baf-9c14-b2f1f46b5aed" containerName="route-controller-manager" Mar 08 22:14:42.539766 master-0 kubenswrapper[29458]: E0308 22:14:42.539743 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da51940a-a38f-4baf-9c14-b2f1f46b5aed" containerName="route-controller-manager" Mar 08 22:14:42.539822 master-0 kubenswrapper[29458]: I0308 22:14:42.539765 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="da51940a-a38f-4baf-9c14-b2f1f46b5aed" containerName="route-controller-manager" Mar 08 22:14:42.540216 master-0 
kubenswrapper[29458]: I0308 22:14:42.540174 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="2395900a-ff6b-46ff-92c6-a8a1b5675b67" containerName="controller-manager" Mar 08 22:14:42.540350 master-0 kubenswrapper[29458]: I0308 22:14:42.540243 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="da51940a-a38f-4baf-9c14-b2f1f46b5aed" containerName="route-controller-manager" Mar 08 22:14:42.540350 master-0 kubenswrapper[29458]: I0308 22:14:42.540273 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="da51940a-a38f-4baf-9c14-b2f1f46b5aed" containerName="route-controller-manager" Mar 08 22:14:42.541422 master-0 kubenswrapper[29458]: I0308 22:14:42.541384 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-db5b49478-n22wx" Mar 08 22:14:42.550421 master-0 kubenswrapper[29458]: I0308 22:14:42.543874 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6878946b54-5tglw"] Mar 08 22:14:42.550421 master-0 kubenswrapper[29458]: I0308 22:14:42.544904 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="2395900a-ff6b-46ff-92c6-a8a1b5675b67" containerName="controller-manager" Mar 08 22:14:42.550421 master-0 kubenswrapper[29458]: I0308 22:14:42.545554 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6878946b54-5tglw" Mar 08 22:14:42.550421 master-0 kubenswrapper[29458]: I0308 22:14:42.546736 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 08 22:14:42.550421 master-0 kubenswrapper[29458]: I0308 22:14:42.546787 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-z5x7c" Mar 08 22:14:42.550421 master-0 kubenswrapper[29458]: I0308 22:14:42.547748 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 08 22:14:42.550421 master-0 kubenswrapper[29458]: I0308 22:14:42.547832 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 08 22:14:42.550421 master-0 kubenswrapper[29458]: I0308 22:14:42.547993 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 08 22:14:42.550421 master-0 kubenswrapper[29458]: I0308 22:14:42.548023 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 08 22:14:42.550421 master-0 kubenswrapper[29458]: I0308 22:14:42.548297 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 08 22:14:42.550421 master-0 kubenswrapper[29458]: I0308 22:14:42.548629 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 08 22:14:42.550421 master-0 kubenswrapper[29458]: I0308 22:14:42.549521 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-6d4tw" Mar 08 22:14:42.550421 master-0 kubenswrapper[29458]: I0308 22:14:42.549934 29458 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 08 22:14:42.552887 master-0 kubenswrapper[29458]: I0308 22:14:42.550454 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 08 22:14:42.552887 master-0 kubenswrapper[29458]: I0308 22:14:42.550263 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 08 22:14:42.560203 master-0 kubenswrapper[29458]: I0308 22:14:42.556650 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 08 22:14:42.560203 master-0 kubenswrapper[29458]: I0308 22:14:42.559775 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6878946b54-5tglw"] Mar 08 22:14:42.568120 master-0 kubenswrapper[29458]: I0308 22:14:42.568046 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-db5b49478-n22wx"] Mar 08 22:14:42.645759 master-0 kubenswrapper[29458]: I0308 22:14:42.645587 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16b6b83f-22a9-445a-9a2c-5521fa7586ee-serving-cert\") pod \"route-controller-manager-6878946b54-5tglw\" (UID: \"16b6b83f-22a9-445a-9a2c-5521fa7586ee\") " pod="openshift-route-controller-manager/route-controller-manager-6878946b54-5tglw" Mar 08 22:14:42.645759 master-0 kubenswrapper[29458]: I0308 22:14:42.645766 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/566ca59c-54cc-4552-8ce5-2f1c5c40cc7d-config\") pod \"controller-manager-db5b49478-n22wx\" (UID: \"566ca59c-54cc-4552-8ce5-2f1c5c40cc7d\") " pod="openshift-controller-manager/controller-manager-db5b49478-n22wx" Mar 08 22:14:42.646314 master-0 kubenswrapper[29458]: I0308 22:14:42.645853 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px4tp\" (UniqueName: \"kubernetes.io/projected/566ca59c-54cc-4552-8ce5-2f1c5c40cc7d-kube-api-access-px4tp\") pod \"controller-manager-db5b49478-n22wx\" (UID: \"566ca59c-54cc-4552-8ce5-2f1c5c40cc7d\") " pod="openshift-controller-manager/controller-manager-db5b49478-n22wx" Mar 08 22:14:42.646314 master-0 kubenswrapper[29458]: I0308 22:14:42.646032 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16b6b83f-22a9-445a-9a2c-5521fa7586ee-config\") pod \"route-controller-manager-6878946b54-5tglw\" (UID: \"16b6b83f-22a9-445a-9a2c-5521fa7586ee\") " pod="openshift-route-controller-manager/route-controller-manager-6878946b54-5tglw" Mar 08 22:14:42.646314 master-0 kubenswrapper[29458]: I0308 22:14:42.646125 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/566ca59c-54cc-4552-8ce5-2f1c5c40cc7d-client-ca\") pod \"controller-manager-db5b49478-n22wx\" (UID: \"566ca59c-54cc-4552-8ce5-2f1c5c40cc7d\") " pod="openshift-controller-manager/controller-manager-db5b49478-n22wx" Mar 08 22:14:42.646314 master-0 kubenswrapper[29458]: I0308 22:14:42.646251 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjfll\" 
(UniqueName: \"kubernetes.io/projected/16b6b83f-22a9-445a-9a2c-5521fa7586ee-kube-api-access-zjfll\") pod \"route-controller-manager-6878946b54-5tglw\" (UID: \"16b6b83f-22a9-445a-9a2c-5521fa7586ee\") " pod="openshift-route-controller-manager/route-controller-manager-6878946b54-5tglw" Mar 08 22:14:42.646567 master-0 kubenswrapper[29458]: I0308 22:14:42.646366 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16b6b83f-22a9-445a-9a2c-5521fa7586ee-client-ca\") pod \"route-controller-manager-6878946b54-5tglw\" (UID: \"16b6b83f-22a9-445a-9a2c-5521fa7586ee\") " pod="openshift-route-controller-manager/route-controller-manager-6878946b54-5tglw" Mar 08 22:14:42.646567 master-0 kubenswrapper[29458]: I0308 22:14:42.646417 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/566ca59c-54cc-4552-8ce5-2f1c5c40cc7d-proxy-ca-bundles\") pod \"controller-manager-db5b49478-n22wx\" (UID: \"566ca59c-54cc-4552-8ce5-2f1c5c40cc7d\") " pod="openshift-controller-manager/controller-manager-db5b49478-n22wx" Mar 08 22:14:42.646567 master-0 kubenswrapper[29458]: I0308 22:14:42.646511 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/566ca59c-54cc-4552-8ce5-2f1c5c40cc7d-serving-cert\") pod \"controller-manager-db5b49478-n22wx\" (UID: \"566ca59c-54cc-4552-8ce5-2f1c5c40cc7d\") " pod="openshift-controller-manager/controller-manager-db5b49478-n22wx" Mar 08 22:14:42.748195 master-0 kubenswrapper[29458]: I0308 22:14:42.747946 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16b6b83f-22a9-445a-9a2c-5521fa7586ee-client-ca\") pod \"route-controller-manager-6878946b54-5tglw\" (UID: \"16b6b83f-22a9-445a-9a2c-5521fa7586ee\") " pod="openshift-route-controller-manager/route-controller-manager-6878946b54-5tglw" Mar 08 22:14:42.748195 master-0 kubenswrapper[29458]: I0308 22:14:42.748034 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/566ca59c-54cc-4552-8ce5-2f1c5c40cc7d-proxy-ca-bundles\") pod \"controller-manager-db5b49478-n22wx\" (UID: \"566ca59c-54cc-4552-8ce5-2f1c5c40cc7d\") " pod="openshift-controller-manager/controller-manager-db5b49478-n22wx" Mar 08 22:14:42.748195 master-0 kubenswrapper[29458]: I0308 22:14:42.748124 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/566ca59c-54cc-4552-8ce5-2f1c5c40cc7d-serving-cert\") pod \"controller-manager-db5b49478-n22wx\" (UID: \"566ca59c-54cc-4552-8ce5-2f1c5c40cc7d\") " pod="openshift-controller-manager/controller-manager-db5b49478-n22wx" Mar 08 22:14:42.748195 master-0 kubenswrapper[29458]: I0308 22:14:42.748156 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16b6b83f-22a9-445a-9a2c-5521fa7586ee-serving-cert\") pod \"route-controller-manager-6878946b54-5tglw\" (UID: \"16b6b83f-22a9-445a-9a2c-5521fa7586ee\") " pod="openshift-route-controller-manager/route-controller-manager-6878946b54-5tglw" Mar 08 22:14:42.748195 master-0 kubenswrapper[29458]: I0308 22:14:42.748201 29458 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/566ca59c-54cc-4552-8ce5-2f1c5c40cc7d-config\") pod \"controller-manager-db5b49478-n22wx\" (UID: \"566ca59c-54cc-4552-8ce5-2f1c5c40cc7d\") " pod="openshift-controller-manager/controller-manager-db5b49478-n22wx" Mar 08 22:14:42.748846 master-0 kubenswrapper[29458]: I0308 22:14:42.748237 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px4tp\" (UniqueName: \"kubernetes.io/projected/566ca59c-54cc-4552-8ce5-2f1c5c40cc7d-kube-api-access-px4tp\") pod \"controller-manager-db5b49478-n22wx\" (UID: \"566ca59c-54cc-4552-8ce5-2f1c5c40cc7d\") " pod="openshift-controller-manager/controller-manager-db5b49478-n22wx" Mar 08 22:14:42.748846 master-0 kubenswrapper[29458]: I0308 22:14:42.748741 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16b6b83f-22a9-445a-9a2c-5521fa7586ee-config\") pod \"route-controller-manager-6878946b54-5tglw\" (UID: \"16b6b83f-22a9-445a-9a2c-5521fa7586ee\") " pod="openshift-route-controller-manager/route-controller-manager-6878946b54-5tglw" Mar 08 22:14:42.748846 master-0 kubenswrapper[29458]: I0308 22:14:42.748795 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/566ca59c-54cc-4552-8ce5-2f1c5c40cc7d-client-ca\") pod \"controller-manager-db5b49478-n22wx\" (UID: \"566ca59c-54cc-4552-8ce5-2f1c5c40cc7d\") " pod="openshift-controller-manager/controller-manager-db5b49478-n22wx" Mar 08 22:14:42.748846 master-0 kubenswrapper[29458]: I0308 22:14:42.748824 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjfll\" (UniqueName: \"kubernetes.io/projected/16b6b83f-22a9-445a-9a2c-5521fa7586ee-kube-api-access-zjfll\") pod \"route-controller-manager-6878946b54-5tglw\" (UID: \"16b6b83f-22a9-445a-9a2c-5521fa7586ee\") " pod="openshift-route-controller-manager/route-controller-manager-6878946b54-5tglw" Mar 08 22:14:42.751369 master-0 kubenswrapper[29458]: I0308 22:14:42.751033 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/566ca59c-54cc-4552-8ce5-2f1c5c40cc7d-client-ca\") pod \"controller-manager-db5b49478-n22wx\" (UID: \"566ca59c-54cc-4552-8ce5-2f1c5c40cc7d\") " pod="openshift-controller-manager/controller-manager-db5b49478-n22wx" Mar 08 22:14:42.751813 master-0 kubenswrapper[29458]: I0308 22:14:42.751565 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16b6b83f-22a9-445a-9a2c-5521fa7586ee-client-ca\") pod \"route-controller-manager-6878946b54-5tglw\" (UID: \"16b6b83f-22a9-445a-9a2c-5521fa7586ee\") " pod="openshift-route-controller-manager/route-controller-manager-6878946b54-5tglw" Mar 08 22:14:42.751904 master-0 kubenswrapper[29458]: I0308 22:14:42.751853 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16b6b83f-22a9-445a-9a2c-5521fa7586ee-config\") pod \"route-controller-manager-6878946b54-5tglw\" (UID: \"16b6b83f-22a9-445a-9a2c-5521fa7586ee\") " pod="openshift-route-controller-manager/route-controller-manager-6878946b54-5tglw" Mar 08 22:14:42.752525 master-0 kubenswrapper[29458]: I0308 22:14:42.752437 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/566ca59c-54cc-4552-8ce5-2f1c5c40cc7d-config\") pod \"controller-manager-db5b49478-n22wx\" (UID: \"566ca59c-54cc-4552-8ce5-2f1c5c40cc7d\") " pod="openshift-controller-manager/controller-manager-db5b49478-n22wx" Mar 08 22:14:42.753357 master-0 kubenswrapper[29458]: I0308 22:14:42.752677 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/566ca59c-54cc-4552-8ce5-2f1c5c40cc7d-proxy-ca-bundles\") pod \"controller-manager-db5b49478-n22wx\" (UID: \"566ca59c-54cc-4552-8ce5-2f1c5c40cc7d\") " pod="openshift-controller-manager/controller-manager-db5b49478-n22wx" Mar 08 22:14:42.754433 master-0 kubenswrapper[29458]: I0308 22:14:42.754359 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16b6b83f-22a9-445a-9a2c-5521fa7586ee-serving-cert\") pod \"route-controller-manager-6878946b54-5tglw\" (UID: \"16b6b83f-22a9-445a-9a2c-5521fa7586ee\") " pod="openshift-route-controller-manager/route-controller-manager-6878946b54-5tglw" Mar 08 22:14:42.757148 master-0 kubenswrapper[29458]: I0308 22:14:42.755337 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/566ca59c-54cc-4552-8ce5-2f1c5c40cc7d-serving-cert\") pod \"controller-manager-db5b49478-n22wx\" (UID: \"566ca59c-54cc-4552-8ce5-2f1c5c40cc7d\") " pod="openshift-controller-manager/controller-manager-db5b49478-n22wx" Mar 08 22:14:42.772499 master-0 kubenswrapper[29458]: I0308 22:14:42.772453 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px4tp\" (UniqueName: \"kubernetes.io/projected/566ca59c-54cc-4552-8ce5-2f1c5c40cc7d-kube-api-access-px4tp\") pod \"controller-manager-db5b49478-n22wx\" (UID: \"566ca59c-54cc-4552-8ce5-2f1c5c40cc7d\") " pod="openshift-controller-manager/controller-manager-db5b49478-n22wx" Mar 08 22:14:42.777323 master-0 kubenswrapper[29458]: I0308 22:14:42.777299 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjfll\" (UniqueName: \"kubernetes.io/projected/16b6b83f-22a9-445a-9a2c-5521fa7586ee-kube-api-access-zjfll\") pod \"route-controller-manager-6878946b54-5tglw\" (UID: \"16b6b83f-22a9-445a-9a2c-5521fa7586ee\") " pod="openshift-route-controller-manager/route-controller-manager-6878946b54-5tglw" Mar 08 22:14:42.881001 master-0 kubenswrapper[29458]: I0308 22:14:42.880714 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-db5b49478-n22wx" Mar 08 22:14:42.907618 master-0 kubenswrapper[29458]: I0308 22:14:42.907242 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6878946b54-5tglw" Mar 08 22:14:42.982467 master-0 kubenswrapper[29458]: I0308 22:14:42.982378 29458 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2395900a-ff6b-46ff-92c6-a8a1b5675b67" path="/var/lib/kubelet/pods/2395900a-ff6b-46ff-92c6-a8a1b5675b67/volumes" Mar 08 22:14:42.983617 master-0 kubenswrapper[29458]: I0308 22:14:42.983146 29458 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da51940a-a38f-4baf-9c14-b2f1f46b5aed" path="/var/lib/kubelet/pods/da51940a-a38f-4baf-9c14-b2f1f46b5aed/volumes" Mar 08 22:14:43.056680 master-0 kubenswrapper[29458]: I0308 22:14:43.056588 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19","Type":"ContainerStarted","Data":"3f9ca6c14983c00d385f992936c22af3c101d63487151509576af164ee7412bd"} Mar 08 22:14:43.395965 master-0 kubenswrapper[29458]: I0308 22:14:43.395843 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-master-0" podStartSLOduration=2.395781574 podStartE2EDuration="2.395781574s" podCreationTimestamp="2026-03-08 22:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:14:43.092629512 +0000 UTC m=+52.380687114" watchObservedRunningTime="2026-03-08 22:14:43.395781574 +0000 UTC m=+52.683839206" Mar 08 22:14:43.396917 master-0 kubenswrapper[29458]: I0308 22:14:43.396858 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-db5b49478-n22wx"] Mar 08 22:14:43.411877 master-0 kubenswrapper[29458]: W0308 22:14:43.411629 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod566ca59c_54cc_4552_8ce5_2f1c5c40cc7d.slice/crio-5a3de9e2c2fa56a67b6b7b24f8d3b94bd1d56c0c3258a13d1da14e37cfcbae20 WatchSource:0}: Error finding container 5a3de9e2c2fa56a67b6b7b24f8d3b94bd1d56c0c3258a13d1da14e37cfcbae20: Status 404 returned error can't find the container with id 5a3de9e2c2fa56a67b6b7b24f8d3b94bd1d56c0c3258a13d1da14e37cfcbae20 Mar 08 22:14:43.488022 master-0 kubenswrapper[29458]: I0308 22:14:43.487634 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6878946b54-5tglw"] Mar 08 22:14:43.512308 master-0 kubenswrapper[29458]: W0308 22:14:43.512226 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16b6b83f_22a9_445a_9a2c_5521fa7586ee.slice/crio-d6a3bb63611995f2796cabbbe8146618f64ea7c55f0bebba2f5d1dea1752712f WatchSource:0}: Error finding container d6a3bb63611995f2796cabbbe8146618f64ea7c55f0bebba2f5d1dea1752712f: Status 404 returned error can't find the container with id d6a3bb63611995f2796cabbbe8146618f64ea7c55f0bebba2f5d1dea1752712f Mar 08 22:14:44.067589 master-0 kubenswrapper[29458]: I0308 22:14:44.067519 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6878946b54-5tglw" event={"ID":"16b6b83f-22a9-445a-9a2c-5521fa7586ee","Type":"ContainerStarted","Data":"f8ad7aea4760ce459676ebcd006ffb9f929bb1971f877f8cb512bfdb8d7d3feb"} Mar 08 22:14:44.069738 master-0 kubenswrapper[29458]: I0308 22:14:44.067604 29458 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6878946b54-5tglw" event={"ID":"16b6b83f-22a9-445a-9a2c-5521fa7586ee","Type":"ContainerStarted","Data":"d6a3bb63611995f2796cabbbe8146618f64ea7c55f0bebba2f5d1dea1752712f"} Mar 08 22:14:44.069738 master-0 kubenswrapper[29458]: I0308 22:14:44.067729 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6878946b54-5tglw" Mar 08 22:14:44.069963 master-0 kubenswrapper[29458]: I0308 22:14:44.069925 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-db5b49478-n22wx" event={"ID":"566ca59c-54cc-4552-8ce5-2f1c5c40cc7d","Type":"ContainerStarted","Data":"0432b20403b90690d16e0a79ecd9951df91cd0096426b4bc784cfad06ee8a598"} Mar 08 22:14:44.070015 master-0 kubenswrapper[29458]: I0308 22:14:44.069968 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-db5b49478-n22wx" event={"ID":"566ca59c-54cc-4552-8ce5-2f1c5c40cc7d","Type":"ContainerStarted","Data":"5a3de9e2c2fa56a67b6b7b24f8d3b94bd1d56c0c3258a13d1da14e37cfcbae20"} Mar 08 22:14:44.070318 master-0 kubenswrapper[29458]: I0308 22:14:44.070274 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-db5b49478-n22wx" Mar 08 22:14:44.074479 master-0 kubenswrapper[29458]: I0308 22:14:44.074436 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-db5b49478-n22wx" Mar 08 22:14:44.111603 master-0 kubenswrapper[29458]: I0308 22:14:44.111512 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6878946b54-5tglw" podStartSLOduration=3.111485938 podStartE2EDuration="3.111485938s" podCreationTimestamp="2026-03-08 22:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:14:44.088260765 +0000 UTC m=+53.376318377" watchObservedRunningTime="2026-03-08 22:14:44.111485938 +0000 UTC m=+53.399543560" Mar 08 22:14:44.111862 master-0 kubenswrapper[29458]: I0308 22:14:44.111821 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-db5b49478-n22wx" podStartSLOduration=3.111811587 podStartE2EDuration="3.111811587s" podCreationTimestamp="2026-03-08 22:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:14:44.106639346 +0000 UTC m=+53.394696938" watchObservedRunningTime="2026-03-08 22:14:44.111811587 +0000 UTC m=+53.399869219" Mar 08 22:14:44.494218 master-0 kubenswrapper[29458]: I0308 22:14:44.494009 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6878946b54-5tglw" Mar 08 22:14:45.913027 master-0 kubenswrapper[29458]: I0308 22:14:45.912952 29458 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" podUID="2ef568b8-cb7e-47ac-ab67-dc3058c2e374" containerName="oauth-openshift" containerID="cri-o://77dcc71cefdd2f6800116dae9e3186f13736e8f9c3747e3cefc96b68c0027b3f" gracePeriod=15 Mar 08 22:14:46.092914 master-0 kubenswrapper[29458]: I0308 
22:14:46.092804 29458 generic.go:334] "Generic (PLEG): container finished" podID="2ef568b8-cb7e-47ac-ab67-dc3058c2e374" containerID="77dcc71cefdd2f6800116dae9e3186f13736e8f9c3747e3cefc96b68c0027b3f" exitCode=0 Mar 08 22:14:46.093246 master-0 kubenswrapper[29458]: I0308 22:14:46.092965 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" event={"ID":"2ef568b8-cb7e-47ac-ab67-dc3058c2e374","Type":"ContainerDied","Data":"77dcc71cefdd2f6800116dae9e3186f13736e8f9c3747e3cefc96b68c0027b3f"} Mar 08 22:14:46.476895 master-0 kubenswrapper[29458]: I0308 22:14:46.476817 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" Mar 08 22:14:46.518606 master-0 kubenswrapper[29458]: I0308 22:14:46.518528 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-user-template-login\") pod \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " Mar 08 22:14:46.518606 master-0 kubenswrapper[29458]: I0308 22:14:46.518618 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgc2l\" (UniqueName: \"kubernetes.io/projected/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-kube-api-access-vgc2l\") pod \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " Mar 08 22:14:46.519164 master-0 kubenswrapper[29458]: I0308 22:14:46.518641 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-serving-cert\") pod \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " Mar 08 22:14:46.519164 master-0 kubenswrapper[29458]: I0308 22:14:46.518675 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-cliconfig\") pod \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " Mar 08 22:14:46.519164 master-0 kubenswrapper[29458]: I0308 22:14:46.518700 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-user-template-error\") pod \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " Mar 08 22:14:46.519164 master-0 kubenswrapper[29458]: I0308 22:14:46.518721 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-router-certs\") pod \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " Mar 08 22:14:46.519164 master-0 kubenswrapper[29458]: I0308 22:14:46.518737 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-trusted-ca-bundle\") pod \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\" (UID: 
\"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " Mar 08 22:14:46.519164 master-0 kubenswrapper[29458]: I0308 22:14:46.518756 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-audit-dir\") pod \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " Mar 08 22:14:46.519164 master-0 kubenswrapper[29458]: I0308 22:14:46.518794 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-service-ca\") pod \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " Mar 08 22:14:46.519164 master-0 kubenswrapper[29458]: I0308 22:14:46.518821 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-session\") pod \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " Mar 08 22:14:46.519164 master-0 kubenswrapper[29458]: I0308 22:14:46.518842 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-audit-policies\") pod \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " Mar 08 22:14:46.519164 master-0 kubenswrapper[29458]: I0308 22:14:46.518882 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-user-template-provider-selection\") pod \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " Mar 08 22:14:46.519164 master-0 kubenswrapper[29458]: I0308 22:14:46.518930 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-ocp-branding-template\") pod \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\" (UID: \"2ef568b8-cb7e-47ac-ab67-dc3058c2e374\") " Mar 08 22:14:46.520993 master-0 kubenswrapper[29458]: I0308 22:14:46.520774 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "2ef568b8-cb7e-47ac-ab67-dc3058c2e374" (UID: "2ef568b8-cb7e-47ac-ab67-dc3058c2e374"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:14:46.520993 master-0 kubenswrapper[29458]: I0308 22:14:46.520849 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "2ef568b8-cb7e-47ac-ab67-dc3058c2e374" (UID: "2ef568b8-cb7e-47ac-ab67-dc3058c2e374"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:14:46.520993 master-0 kubenswrapper[29458]: I0308 22:14:46.520872 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "2ef568b8-cb7e-47ac-ab67-dc3058c2e374" (UID: "2ef568b8-cb7e-47ac-ab67-dc3058c2e374"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:14:46.521418 master-0 kubenswrapper[29458]: I0308 22:14:46.521284 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "2ef568b8-cb7e-47ac-ab67-dc3058c2e374" (UID: "2ef568b8-cb7e-47ac-ab67-dc3058c2e374"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:14:46.521418 master-0 kubenswrapper[29458]: I0308 22:14:46.521256 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "2ef568b8-cb7e-47ac-ab67-dc3058c2e374" (UID: "2ef568b8-cb7e-47ac-ab67-dc3058c2e374"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:14:46.523373 master-0 kubenswrapper[29458]: I0308 22:14:46.523264 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "2ef568b8-cb7e-47ac-ab67-dc3058c2e374" (UID: "2ef568b8-cb7e-47ac-ab67-dc3058c2e374"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 22:14:46.524671 master-0 kubenswrapper[29458]: I0308 22:14:46.524610 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "2ef568b8-cb7e-47ac-ab67-dc3058c2e374" (UID: "2ef568b8-cb7e-47ac-ab67-dc3058c2e374"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 22:14:46.524782 master-0 kubenswrapper[29458]: I0308 22:14:46.524676 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "2ef568b8-cb7e-47ac-ab67-dc3058c2e374" (UID: "2ef568b8-cb7e-47ac-ab67-dc3058c2e374"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 22:14:46.524987 master-0 kubenswrapper[29458]: I0308 22:14:46.524873 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "2ef568b8-cb7e-47ac-ab67-dc3058c2e374" (UID: "2ef568b8-cb7e-47ac-ab67-dc3058c2e374"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 22:14:46.525503 master-0 kubenswrapper[29458]: I0308 22:14:46.525393 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "2ef568b8-cb7e-47ac-ab67-dc3058c2e374" (UID: "2ef568b8-cb7e-47ac-ab67-dc3058c2e374"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 22:14:46.532251 master-0 kubenswrapper[29458]: I0308 22:14:46.526421 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-kube-api-access-vgc2l" (OuterVolumeSpecName: "kube-api-access-vgc2l") pod "2ef568b8-cb7e-47ac-ab67-dc3058c2e374" (UID: "2ef568b8-cb7e-47ac-ab67-dc3058c2e374"). InnerVolumeSpecName "kube-api-access-vgc2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 22:14:46.532251 master-0 kubenswrapper[29458]: I0308 22:14:46.526600 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "2ef568b8-cb7e-47ac-ab67-dc3058c2e374" (UID: "2ef568b8-cb7e-47ac-ab67-dc3058c2e374"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 22:14:46.532537 master-0 kubenswrapper[29458]: I0308 22:14:46.532326 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "2ef568b8-cb7e-47ac-ab67-dc3058c2e374" (UID: "2ef568b8-cb7e-47ac-ab67-dc3058c2e374"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 22:14:46.620816 master-0 kubenswrapper[29458]: I0308 22:14:46.620683 29458 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Mar 08 22:14:46.620816 master-0 kubenswrapper[29458]: I0308 22:14:46.620807 29458 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Mar 08 22:14:46.620816 master-0 kubenswrapper[29458]: I0308 22:14:46.620828 29458 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgc2l\" (UniqueName: \"kubernetes.io/projected/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-kube-api-access-vgc2l\") on node \"master-0\" DevicePath \"\"" Mar 08 22:14:46.621217 master-0 kubenswrapper[29458]: I0308 22:14:46.620844 29458 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 08 22:14:46.621217 master-0 kubenswrapper[29458]: I0308 22:14:46.620862 29458 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Mar 08 22:14:46.621217 master-0 kubenswrapper[29458]: I0308 22:14:46.620877 29458 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Mar 08 22:14:46.621217 master-0 kubenswrapper[29458]: I0308 22:14:46.620892 29458 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Mar 08 22:14:46.621217 master-0 kubenswrapper[29458]: I0308 22:14:46.620906 29458 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 08 22:14:46.621217 master-0 kubenswrapper[29458]: I0308 22:14:46.620932 29458 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 22:14:46.621217 master-0 kubenswrapper[29458]: I0308 22:14:46.620948 29458 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 08 22:14:46.621217 master-0 kubenswrapper[29458]: I0308 22:14:46.620961 29458 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Mar 08 22:14:46.621217 master-0 
Mar 08 22:14:46.621217 master-0 kubenswrapper[29458]: I0308 22:14:46.620991 29458 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2ef568b8-cb7e-47ac-ab67-dc3058c2e374-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\""
Mar 08 22:14:47.102608 master-0 kubenswrapper[29458]: I0308 22:14:47.102503 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d" event={"ID":"2ef568b8-cb7e-47ac-ab67-dc3058c2e374","Type":"ContainerDied","Data":"b52d2aa94cdc6ba996dbed1331a5cbf88e8befe42d5cd859c848fd1cea1bc343"}
Mar 08 22:14:47.105919 master-0 kubenswrapper[29458]: I0308 22:14:47.102638 29458 scope.go:117] "RemoveContainer" containerID="77dcc71cefdd2f6800116dae9e3186f13736e8f9c3747e3cefc96b68c0027b3f"
Mar 08 22:14:47.105919 master-0 kubenswrapper[29458]: I0308 22:14:47.102550 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-75c47c75-b4x7d"
Mar 08 22:14:47.148336 master-0 kubenswrapper[29458]: I0308 22:14:47.148229 29458 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-75c47c75-b4x7d"]
Mar 08 22:14:47.153008 master-0 kubenswrapper[29458]: I0308 22:14:47.152893 29458 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-75c47c75-b4x7d"]
Mar 08 22:14:48.541120 master-0 kubenswrapper[29458]: I0308 22:14:48.541045 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-c84587d9b-7j6cs"]
Mar 08 22:14:48.541790 master-0 kubenswrapper[29458]: E0308 22:14:48.541395 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ef568b8-cb7e-47ac-ab67-dc3058c2e374" containerName="oauth-openshift"
Mar 08 22:14:48.541790 master-0 kubenswrapper[29458]: I0308 22:14:48.541410 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ef568b8-cb7e-47ac-ab67-dc3058c2e374" containerName="oauth-openshift"
Mar 08 22:14:48.541790 master-0 kubenswrapper[29458]: I0308 22:14:48.541591 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ef568b8-cb7e-47ac-ab67-dc3058c2e374" containerName="oauth-openshift"
Mar 08 22:14:48.542188 master-0 kubenswrapper[29458]: I0308 22:14:48.542161 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs"
Need to start a new one" pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.547187 master-0 kubenswrapper[29458]: I0308 22:14:48.547142 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 08 22:14:48.547825 master-0 kubenswrapper[29458]: I0308 22:14:48.547785 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 08 22:14:48.548363 master-0 kubenswrapper[29458]: I0308 22:14:48.548305 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 08 22:14:48.548423 master-0 kubenswrapper[29458]: I0308 22:14:48.548367 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 08 22:14:48.548423 master-0 kubenswrapper[29458]: I0308 22:14:48.548403 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 08 22:14:48.548583 master-0 kubenswrapper[29458]: I0308 22:14:48.548325 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 08 22:14:48.548632 master-0 kubenswrapper[29458]: I0308 22:14:48.548616 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 08 22:14:48.548779 master-0 kubenswrapper[29458]: I0308 22:14:48.548750 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 08 22:14:48.549212 master-0 kubenswrapper[29458]: I0308 22:14:48.549175 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-bk27b" Mar 08 22:14:48.549304 master-0 kubenswrapper[29458]: I0308 22:14:48.549270 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 08 22:14:48.549489 master-0 kubenswrapper[29458]: I0308 22:14:48.549456 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 08 22:14:48.549866 master-0 kubenswrapper[29458]: I0308 22:14:48.549834 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 08 22:14:48.581231 master-0 kubenswrapper[29458]: I0308 22:14:48.571679 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 08 22:14:48.583744 master-0 kubenswrapper[29458]: I0308 22:14:48.583681 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 08 22:14:48.638557 master-0 kubenswrapper[29458]: I0308 22:14:48.589620 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-c84587d9b-7j6cs"] Mar 08 22:14:48.657248 master-0 kubenswrapper[29458]: I0308 22:14:48.657097 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: 
\"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.657248 master-0 kubenswrapper[29458]: I0308 22:14:48.657165 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-user-template-login\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.657487 master-0 kubenswrapper[29458]: I0308 22:14:48.657276 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-serving-cert\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.657487 master-0 kubenswrapper[29458]: I0308 22:14:48.657417 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.657487 master-0 kubenswrapper[29458]: I0308 22:14:48.657468 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20eeccf6-8546-446e-be99-555bcc738272-audit-policies\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.657633 master-0 kubenswrapper[29458]: I0308 22:14:48.657569 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.657633 master-0 kubenswrapper[29458]: I0308 22:14:48.657612 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-user-template-error\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.658357 master-0 kubenswrapper[29458]: I0308 22:14:48.657668 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-service-ca\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.659561 master-0 kubenswrapper[29458]: I0308 22:14:48.658730 29458 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-cliconfig\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.659561 master-0 kubenswrapper[29458]: I0308 22:14:48.658921 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-session\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.659561 master-0 kubenswrapper[29458]: I0308 22:14:48.658953 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20eeccf6-8546-446e-be99-555bcc738272-audit-dir\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.659561 master-0 kubenswrapper[29458]: I0308 22:14:48.658981 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-router-certs\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.659561 master-0 kubenswrapper[29458]: I0308 22:14:48.658997 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d64bc\" (UniqueName: \"kubernetes.io/projected/20eeccf6-8546-446e-be99-555bcc738272-kube-api-access-d64bc\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.760996 master-0 kubenswrapper[29458]: I0308 22:14:48.760906 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.761323 master-0 kubenswrapper[29458]: I0308 22:14:48.761096 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-user-template-login\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.761323 master-0 kubenswrapper[29458]: I0308 22:14:48.761131 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-serving-cert\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " 
pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.761323 master-0 kubenswrapper[29458]: I0308 22:14:48.761250 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.761323 master-0 kubenswrapper[29458]: I0308 22:14:48.761289 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20eeccf6-8546-446e-be99-555bcc738272-audit-policies\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.761457 master-0 kubenswrapper[29458]: I0308 22:14:48.761334 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.761457 master-0 kubenswrapper[29458]: I0308 22:14:48.761356 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-user-template-error\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.761583 master-0 kubenswrapper[29458]: I0308 22:14:48.761555 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-service-ca\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.762259 master-0 kubenswrapper[29458]: I0308 22:14:48.762056 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-cliconfig\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.762326 master-0 kubenswrapper[29458]: I0308 22:14:48.762294 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-session\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.762326 master-0 kubenswrapper[29458]: I0308 22:14:48.762316 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20eeccf6-8546-446e-be99-555bcc738272-audit-dir\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: 
\"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.762402 master-0 kubenswrapper[29458]: I0308 22:14:48.762339 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-router-certs\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.762402 master-0 kubenswrapper[29458]: I0308 22:14:48.762361 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d64bc\" (UniqueName: \"kubernetes.io/projected/20eeccf6-8546-446e-be99-555bcc738272-kube-api-access-d64bc\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.764048 master-0 kubenswrapper[29458]: I0308 22:14:48.763213 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20eeccf6-8546-446e-be99-555bcc738272-audit-policies\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.764048 master-0 kubenswrapper[29458]: I0308 22:14:48.763349 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20eeccf6-8546-446e-be99-555bcc738272-audit-dir\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.764048 master-0 kubenswrapper[29458]: I0308 22:14:48.763340 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-service-ca\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.764048 master-0 kubenswrapper[29458]: I0308 22:14:48.763529 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.764345 master-0 kubenswrapper[29458]: I0308 22:14:48.764266 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-cliconfig\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.765250 master-0 kubenswrapper[29458]: I0308 22:14:48.765211 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: 
\"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.765626 master-0 kubenswrapper[29458]: I0308 22:14:48.765590 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-serving-cert\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.766082 master-0 kubenswrapper[29458]: I0308 22:14:48.765961 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.767659 master-0 kubenswrapper[29458]: I0308 22:14:48.767560 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-user-template-error\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.767659 master-0 kubenswrapper[29458]: I0308 22:14:48.767565 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-user-template-login\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.768293 master-0 kubenswrapper[29458]: I0308 22:14:48.768255 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-session\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.770342 master-0 kubenswrapper[29458]: I0308 22:14:48.770275 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-router-certs\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.788237 master-0 kubenswrapper[29458]: I0308 22:14:48.788179 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d64bc\" (UniqueName: \"kubernetes.io/projected/20eeccf6-8546-446e-be99-555bcc738272-kube-api-access-d64bc\") pod \"oauth-openshift-c84587d9b-7j6cs\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.868210 master-0 kubenswrapper[29458]: I0308 22:14:48.868143 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:48.990206 master-0 kubenswrapper[29458]: I0308 22:14:48.990137 29458 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ef568b8-cb7e-47ac-ab67-dc3058c2e374" path="/var/lib/kubelet/pods/2ef568b8-cb7e-47ac-ab67-dc3058c2e374/volumes" Mar 08 22:14:49.333823 master-0 kubenswrapper[29458]: I0308 22:14:49.333720 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-c84587d9b-7j6cs"] Mar 08 22:14:50.130481 master-0 kubenswrapper[29458]: I0308 22:14:50.130391 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" event={"ID":"20eeccf6-8546-446e-be99-555bcc738272","Type":"ContainerStarted","Data":"a6d9f1e11c525793dca2ef77485eed2565fb204e43ed85234cb2499581944f03"} Mar 08 22:14:50.130481 master-0 kubenswrapper[29458]: I0308 22:14:50.130485 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" event={"ID":"20eeccf6-8546-446e-be99-555bcc738272","Type":"ContainerStarted","Data":"20a4b2bfc53e1f0a3c68b7d82be12654f24f7987d924f354f6872a83092ed569"} Mar 08 22:14:50.131492 master-0 kubenswrapper[29458]: I0308 22:14:50.130863 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:50.140264 master-0 kubenswrapper[29458]: I0308 22:14:50.140180 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:14:50.161311 master-0 kubenswrapper[29458]: I0308 22:14:50.161220 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" podStartSLOduration=30.161192594 podStartE2EDuration="30.161192594s" podCreationTimestamp="2026-03-08 22:14:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:14:50.160312869 +0000 UTC m=+59.448370461" watchObservedRunningTime="2026-03-08 22:14:50.161192594 +0000 UTC m=+59.449250186" Mar 08 22:14:50.964277 master-0 kubenswrapper[29458]: I0308 22:14:50.962952 29458 scope.go:117] "RemoveContainer" containerID="e3a61e0f18998d1659f1848d9ff8c4de1817df1723214bfa069260c375e7739f" Mar 08 22:14:52.255471 master-0 kubenswrapper[29458]: I0308 22:14:52.255374 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 08 22:15:01.097391 master-0 kubenswrapper[29458]: I0308 22:15:01.097313 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access\") pod \"installer-2-master-0\" (UID: \"1d188983-1f19-4c8e-b604-034bd6308139\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 22:15:01.102084 master-0 kubenswrapper[29458]: I0308 22:15:01.102010 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access\") pod \"installer-2-master-0\" (UID: \"1d188983-1f19-4c8e-b604-034bd6308139\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 22:15:01.198473 master-0 kubenswrapper[29458]: I0308 22:15:01.198373 
Mar 08 22:15:01.202013 master-0 kubenswrapper[29458]: I0308 22:15:01.201943 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1d188983-1f19-4c8e-b604-034bd6308139" (UID: "1d188983-1f19-4c8e-b604-034bd6308139"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 22:15:01.300675 master-0 kubenswrapper[29458]: I0308 22:15:01.300565 29458 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d188983-1f19-4c8e-b604-034bd6308139-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 08 22:15:20.307387 master-0 kubenswrapper[29458]: I0308 22:15:20.307262 29458 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 08 22:15:20.309150 master-0 kubenswrapper[29458]: I0308 22:15:20.309106 29458 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 08 22:15:20.309354 master-0 kubenswrapper[29458]: I0308 22:15:20.309272 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 22:15:20.309858 master-0 kubenswrapper[29458]: I0308 22:15:20.309794 29458 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver" containerID="cri-o://fb46cbac56af10ae89fd868088c513812dc7dc7a80fc88543a41d6671502fafd" gracePeriod=15
Mar 08 22:15:20.309942 master-0 kubenswrapper[29458]: I0308 22:15:20.309852 29458 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://a52268c7a9c6b5185ba5bf5a9ab921c572a0729b52859577eadf1743be8f6fc4" gracePeriod=15
Mar 08 22:15:20.310027 master-0 kubenswrapper[29458]: I0308 22:15:20.309889 29458 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b4705c2d549fc59459e0df6c2bc03a0f2277a3053ddf1226d64637e576e24057" gracePeriod=15
Mar 08 22:15:20.310128 master-0 kubenswrapper[29458]: I0308 22:15:20.309945 29458 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://87234509224b5aa296c3edb61a3b1a3b781bb8769597d438cf890a47a6ad14f7" gracePeriod=15
Mar 08 22:15:20.314414 master-0 kubenswrapper[29458]: I0308 22:15:20.310323 29458 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver-cert-syncer" containerID="cri-o://b90ed6d3a021521ef55628a3ddd89c995649bbb6a8fee39458005032f844912d" gracePeriod=15
containerID="cri-o://b90ed6d3a021521ef55628a3ddd89c995649bbb6a8fee39458005032f844912d" gracePeriod=15 Mar 08 22:15:20.314414 master-0 kubenswrapper[29458]: I0308 22:15:20.311365 29458 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 08 22:15:20.314414 master-0 kubenswrapper[29458]: E0308 22:15:20.312466 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver-cert-regeneration-controller" Mar 08 22:15:20.314414 master-0 kubenswrapper[29458]: I0308 22:15:20.312498 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver-cert-regeneration-controller" Mar 08 22:15:20.314414 master-0 kubenswrapper[29458]: E0308 22:15:20.312551 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="setup" Mar 08 22:15:20.314414 master-0 kubenswrapper[29458]: I0308 22:15:20.312567 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="setup" Mar 08 22:15:20.314414 master-0 kubenswrapper[29458]: E0308 22:15:20.312591 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver-cert-syncer" Mar 08 22:15:20.314414 master-0 kubenswrapper[29458]: I0308 22:15:20.312604 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver-cert-syncer" Mar 08 22:15:20.314414 master-0 kubenswrapper[29458]: E0308 22:15:20.312852 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver" Mar 08 22:15:20.314414 master-0 kubenswrapper[29458]: I0308 22:15:20.312871 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver" Mar 08 22:15:20.314414 master-0 kubenswrapper[29458]: E0308 22:15:20.312944 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver-insecure-readyz" Mar 08 22:15:20.314414 master-0 kubenswrapper[29458]: I0308 22:15:20.312963 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver-insecure-readyz" Mar 08 22:15:20.314414 master-0 kubenswrapper[29458]: E0308 22:15:20.312988 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver-check-endpoints" Mar 08 22:15:20.314414 master-0 kubenswrapper[29458]: I0308 22:15:20.313030 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver-check-endpoints" Mar 08 22:15:20.314414 master-0 kubenswrapper[29458]: I0308 22:15:20.313423 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="setup" Mar 08 22:15:20.314414 master-0 kubenswrapper[29458]: I0308 22:15:20.313487 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver" Mar 08 22:15:20.314414 master-0 kubenswrapper[29458]: I0308 22:15:20.313508 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c3280e9367536f782caf8bdc07edb85" 
containerName="kube-apiserver-cert-regeneration-controller" Mar 08 22:15:20.314414 master-0 kubenswrapper[29458]: I0308 22:15:20.313525 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver-cert-syncer" Mar 08 22:15:20.314414 master-0 kubenswrapper[29458]: I0308 22:15:20.313582 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver-insecure-readyz" Mar 08 22:15:20.314414 master-0 kubenswrapper[29458]: I0308 22:15:20.313613 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver-check-endpoints" Mar 08 22:15:20.466054 master-0 kubenswrapper[29458]: I0308 22:15:20.465988 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:15:20.466217 master-0 kubenswrapper[29458]: I0308 22:15:20.466100 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:15:20.466271 master-0 kubenswrapper[29458]: I0308 22:15:20.466218 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:15:20.466323 master-0 kubenswrapper[29458]: I0308 22:15:20.466269 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:15:20.466546 master-0 kubenswrapper[29458]: I0308 22:15:20.466514 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:15:20.466608 master-0 kubenswrapper[29458]: I0308 22:15:20.466561 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:15:20.466659 master-0 kubenswrapper[29458]: I0308 22:15:20.466610 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:15:20.467098 master-0 kubenswrapper[29458]: I0308 22:15:20.467043 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:15:20.569536 master-0 kubenswrapper[29458]: I0308 22:15:20.569445 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:15:20.569836 master-0 kubenswrapper[29458]: I0308 22:15:20.569544 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:15:20.569836 master-0 kubenswrapper[29458]: I0308 22:15:20.569581 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:15:20.569836 master-0 kubenswrapper[29458]: I0308 22:15:20.569634 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:15:20.569836 master-0 kubenswrapper[29458]: I0308 22:15:20.569755 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:15:20.570152 master-0 kubenswrapper[29458]: I0308 22:15:20.569876 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:15:20.570152 master-0 kubenswrapper[29458]: I0308 22:15:20.569950 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:15:20.570152 master-0 kubenswrapper[29458]: I0308 
Mar 08 22:15:20.570152 master-0 kubenswrapper[29458]: I0308 22:15:20.570126 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 22:15:20.570441 master-0 kubenswrapper[29458]: I0308 22:15:20.570183 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 22:15:20.570441 master-0 kubenswrapper[29458]: I0308 22:15:20.570213 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 22:15:20.570441 master-0 kubenswrapper[29458]: I0308 22:15:20.570230 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 22:15:20.570441 master-0 kubenswrapper[29458]: I0308 22:15:20.570219 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 22:15:20.570441 master-0 kubenswrapper[29458]: I0308 22:15:20.570327 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 22:15:20.570441 master-0 kubenswrapper[29458]: I0308 22:15:20.570442 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 22:15:20.570804 master-0 kubenswrapper[29458]: I0308 22:15:20.570531 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:15:21.401231 master-0 kubenswrapper[29458]: I0308 22:15:21.400114 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_4c3280e9367536f782caf8bdc07edb85/kube-apiserver-cert-syncer/0.log" Mar 08 22:15:21.402549 master-0 kubenswrapper[29458]: I0308 22:15:21.402206 29458 generic.go:334] "Generic (PLEG): container finished" podID="4c3280e9367536f782caf8bdc07edb85" containerID="b4705c2d549fc59459e0df6c2bc03a0f2277a3053ddf1226d64637e576e24057" exitCode=0 Mar 08 22:15:21.402549 master-0 kubenswrapper[29458]: I0308 22:15:21.402258 29458 generic.go:334] "Generic (PLEG): container finished" podID="4c3280e9367536f782caf8bdc07edb85" containerID="87234509224b5aa296c3edb61a3b1a3b781bb8769597d438cf890a47a6ad14f7" exitCode=0 Mar 08 22:15:21.402549 master-0 kubenswrapper[29458]: I0308 22:15:21.402274 29458 generic.go:334] "Generic (PLEG): container finished" podID="4c3280e9367536f782caf8bdc07edb85" containerID="a52268c7a9c6b5185ba5bf5a9ab921c572a0729b52859577eadf1743be8f6fc4" exitCode=0 Mar 08 22:15:21.402549 master-0 kubenswrapper[29458]: I0308 22:15:21.402284 29458 generic.go:334] "Generic (PLEG): container finished" podID="4c3280e9367536f782caf8bdc07edb85" containerID="b90ed6d3a021521ef55628a3ddd89c995649bbb6a8fee39458005032f844912d" exitCode=2 Mar 08 22:15:26.463238 master-0 kubenswrapper[29458]: I0308 22:15:26.463146 29458 generic.go:334] "Generic (PLEG): container finished" podID="c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19" containerID="3f9ca6c14983c00d385f992936c22af3c101d63487151509576af164ee7412bd" exitCode=0 Mar 08 22:15:26.463238 master-0 kubenswrapper[29458]: I0308 22:15:26.463212 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19","Type":"ContainerDied","Data":"3f9ca6c14983c00d385f992936c22af3c101d63487151509576af164ee7412bd"} Mar 08 22:15:27.873218 master-0 kubenswrapper[29458]: I0308 22:15:27.873174 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 08 22:15:28.014914 master-0 kubenswrapper[29458]: I0308 22:15:28.014823 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19" (UID: "c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:15:28.014914 master-0 kubenswrapper[29458]: I0308 22:15:28.014696 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19-kubelet-dir\") pod \"c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19\" (UID: \"c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19\") " Mar 08 22:15:28.015382 master-0 kubenswrapper[29458]: I0308 22:15:28.014947 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19-var-lock\") pod \"c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19\" (UID: \"c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19\") " Mar 08 22:15:28.015382 master-0 kubenswrapper[29458]: I0308 22:15:28.015009 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19-var-lock" (OuterVolumeSpecName: "var-lock") pod "c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19" (UID: "c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:15:28.015514 master-0 kubenswrapper[29458]: I0308 22:15:28.015403 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19-kube-api-access\") pod \"c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19\" (UID: \"c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19\") " Mar 08 22:15:28.015955 master-0 kubenswrapper[29458]: I0308 22:15:28.015912 29458 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 22:15:28.015955 master-0 kubenswrapper[29458]: I0308 22:15:28.015936 29458 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 08 22:15:28.020827 master-0 kubenswrapper[29458]: I0308 22:15:28.020734 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19" (UID: "c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 22:15:28.118123 master-0 kubenswrapper[29458]: I0308 22:15:28.117889 29458 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 08 22:15:28.480104 master-0 kubenswrapper[29458]: I0308 22:15:28.479904 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19","Type":"ContainerDied","Data":"0d2280ac16362d670434fdae96e3e2d711c7678f350ddd00219eadd6fdceb1ca"} Mar 08 22:15:28.480104 master-0 kubenswrapper[29458]: I0308 22:15:28.479964 29458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d2280ac16362d670434fdae96e3e2d711c7678f350ddd00219eadd6fdceb1ca" Mar 08 22:15:28.480104 master-0 kubenswrapper[29458]: I0308 22:15:28.480025 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 08 22:15:28.840724 master-0 kubenswrapper[29458]: E0308 22:15:28.840607 29458 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:15:28Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:15:28Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:15:28Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T22:15:28Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 22:15:28.841944 master-0 kubenswrapper[29458]: E0308 22:15:28.841842 29458 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 22:15:28.843183 master-0 kubenswrapper[29458]: E0308 22:15:28.843132 29458 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 22:15:28.844002 master-0 kubenswrapper[29458]: E0308 22:15:28.843955 29458 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 22:15:28.844659 master-0 kubenswrapper[29458]: E0308 22:15:28.844600 29458 kubelet_node_status.go:585] "Error updating node status, will retry" 
err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 22:15:28.844659 master-0 kubenswrapper[29458]: E0308 22:15:28.844648 29458 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 08 22:15:29.304818 master-0 kubenswrapper[29458]: E0308 22:15:29.304701 29458 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 22:15:29.305540 master-0 kubenswrapper[29458]: E0308 22:15:29.305453 29458 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 22:15:29.306069 master-0 kubenswrapper[29458]: E0308 22:15:29.306016 29458 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 22:15:29.308540 master-0 kubenswrapper[29458]: E0308 22:15:29.308493 29458 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 22:15:29.309249 master-0 kubenswrapper[29458]: E0308 22:15:29.309208 29458 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 22:15:29.309361 master-0 kubenswrapper[29458]: I0308 22:15:29.309251 29458 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 08 22:15:29.310249 master-0 kubenswrapper[29458]: E0308 22:15:29.310175 29458 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Mar 08 22:15:29.389015 master-0 kubenswrapper[29458]: I0308 22:15:29.388983 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_4c3280e9367536f782caf8bdc07edb85/kube-apiserver-cert-syncer/0.log" Mar 08 22:15:29.390400 master-0 kubenswrapper[29458]: I0308 22:15:29.390373 29458 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:15:29.492900 master-0 kubenswrapper[29458]: I0308 22:15:29.492705 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_4c3280e9367536f782caf8bdc07edb85/kube-apiserver-cert-syncer/0.log" Mar 08 22:15:29.494804 master-0 kubenswrapper[29458]: I0308 22:15:29.493666 29458 generic.go:334] "Generic (PLEG): container finished" podID="4c3280e9367536f782caf8bdc07edb85" containerID="fb46cbac56af10ae89fd868088c513812dc7dc7a80fc88543a41d6671502fafd" exitCode=0 Mar 08 22:15:29.494804 master-0 kubenswrapper[29458]: I0308 22:15:29.493757 29458 scope.go:117] "RemoveContainer" containerID="b4705c2d549fc59459e0df6c2bc03a0f2277a3053ddf1226d64637e576e24057" Mar 08 22:15:29.494804 master-0 kubenswrapper[29458]: I0308 22:15:29.493956 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:15:29.512450 master-0 kubenswrapper[29458]: E0308 22:15:29.512368 29458 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Mar 08 22:15:29.512839 master-0 kubenswrapper[29458]: I0308 22:15:29.512773 29458 scope.go:117] "RemoveContainer" containerID="87234509224b5aa296c3edb61a3b1a3b781bb8769597d438cf890a47a6ad14f7" Mar 08 22:15:29.533887 master-0 kubenswrapper[29458]: I0308 22:15:29.533747 29458 scope.go:117] "RemoveContainer" containerID="a52268c7a9c6b5185ba5bf5a9ab921c572a0729b52859577eadf1743be8f6fc4" Mar 08 22:15:29.553295 master-0 kubenswrapper[29458]: I0308 22:15:29.553255 29458 scope.go:117] "RemoveContainer" containerID="b90ed6d3a021521ef55628a3ddd89c995649bbb6a8fee39458005032f844912d" Mar 08 22:15:29.553971 master-0 kubenswrapper[29458]: I0308 22:15:29.553907 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-resource-dir\") pod \"4c3280e9367536f782caf8bdc07edb85\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " Mar 08 22:15:29.554047 master-0 kubenswrapper[29458]: I0308 22:15:29.553985 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-cert-dir\") pod \"4c3280e9367536f782caf8bdc07edb85\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " Mar 08 22:15:29.554104 master-0 kubenswrapper[29458]: I0308 22:15:29.554047 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "4c3280e9367536f782caf8bdc07edb85" (UID: "4c3280e9367536f782caf8bdc07edb85"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:15:29.554104 master-0 kubenswrapper[29458]: I0308 22:15:29.554073 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-audit-dir\") pod \"4c3280e9367536f782caf8bdc07edb85\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " Mar 08 22:15:29.554175 master-0 kubenswrapper[29458]: I0308 22:15:29.554119 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "4c3280e9367536f782caf8bdc07edb85" (UID: "4c3280e9367536f782caf8bdc07edb85"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:15:29.554175 master-0 kubenswrapper[29458]: I0308 22:15:29.554119 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "4c3280e9367536f782caf8bdc07edb85" (UID: "4c3280e9367536f782caf8bdc07edb85"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:15:29.554439 master-0 kubenswrapper[29458]: I0308 22:15:29.554405 29458 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 22:15:29.554439 master-0 kubenswrapper[29458]: I0308 22:15:29.554426 29458 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 22:15:29.554439 master-0 kubenswrapper[29458]: I0308 22:15:29.554436 29458 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 22:15:29.573243 master-0 kubenswrapper[29458]: I0308 22:15:29.573161 29458 scope.go:117] "RemoveContainer" containerID="fb46cbac56af10ae89fd868088c513812dc7dc7a80fc88543a41d6671502fafd" Mar 08 22:15:29.597456 master-0 kubenswrapper[29458]: I0308 22:15:29.597345 29458 scope.go:117] "RemoveContainer" containerID="d76596264e33079d456422f8118c10602257329cc4dfb420dc8ccdda43115051" Mar 08 22:15:29.624382 master-0 kubenswrapper[29458]: I0308 22:15:29.620463 29458 scope.go:117] "RemoveContainer" containerID="b4705c2d549fc59459e0df6c2bc03a0f2277a3053ddf1226d64637e576e24057" Mar 08 22:15:29.625336 master-0 kubenswrapper[29458]: E0308 22:15:29.625240 29458 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4705c2d549fc59459e0df6c2bc03a0f2277a3053ddf1226d64637e576e24057\": container with ID starting with b4705c2d549fc59459e0df6c2bc03a0f2277a3053ddf1226d64637e576e24057 not found: ID does not exist" containerID="b4705c2d549fc59459e0df6c2bc03a0f2277a3053ddf1226d64637e576e24057" Mar 08 22:15:29.625427 master-0 kubenswrapper[29458]: I0308 22:15:29.625373 29458 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4705c2d549fc59459e0df6c2bc03a0f2277a3053ddf1226d64637e576e24057"} err="failed to get container status \"b4705c2d549fc59459e0df6c2bc03a0f2277a3053ddf1226d64637e576e24057\": rpc error: code = NotFound desc = could not find container 
\"b4705c2d549fc59459e0df6c2bc03a0f2277a3053ddf1226d64637e576e24057\": container with ID starting with b4705c2d549fc59459e0df6c2bc03a0f2277a3053ddf1226d64637e576e24057 not found: ID does not exist" Mar 08 22:15:29.625427 master-0 kubenswrapper[29458]: I0308 22:15:29.625420 29458 scope.go:117] "RemoveContainer" containerID="87234509224b5aa296c3edb61a3b1a3b781bb8769597d438cf890a47a6ad14f7" Mar 08 22:15:29.626482 master-0 kubenswrapper[29458]: E0308 22:15:29.626231 29458 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87234509224b5aa296c3edb61a3b1a3b781bb8769597d438cf890a47a6ad14f7\": container with ID starting with 87234509224b5aa296c3edb61a3b1a3b781bb8769597d438cf890a47a6ad14f7 not found: ID does not exist" containerID="87234509224b5aa296c3edb61a3b1a3b781bb8769597d438cf890a47a6ad14f7" Mar 08 22:15:29.626482 master-0 kubenswrapper[29458]: I0308 22:15:29.626321 29458 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87234509224b5aa296c3edb61a3b1a3b781bb8769597d438cf890a47a6ad14f7"} err="failed to get container status \"87234509224b5aa296c3edb61a3b1a3b781bb8769597d438cf890a47a6ad14f7\": rpc error: code = NotFound desc = could not find container \"87234509224b5aa296c3edb61a3b1a3b781bb8769597d438cf890a47a6ad14f7\": container with ID starting with 87234509224b5aa296c3edb61a3b1a3b781bb8769597d438cf890a47a6ad14f7 not found: ID does not exist" Mar 08 22:15:29.626482 master-0 kubenswrapper[29458]: I0308 22:15:29.626375 29458 scope.go:117] "RemoveContainer" containerID="a52268c7a9c6b5185ba5bf5a9ab921c572a0729b52859577eadf1743be8f6fc4" Mar 08 22:15:29.627023 master-0 kubenswrapper[29458]: E0308 22:15:29.626975 29458 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a52268c7a9c6b5185ba5bf5a9ab921c572a0729b52859577eadf1743be8f6fc4\": container with ID starting with a52268c7a9c6b5185ba5bf5a9ab921c572a0729b52859577eadf1743be8f6fc4 not found: ID does not exist" containerID="a52268c7a9c6b5185ba5bf5a9ab921c572a0729b52859577eadf1743be8f6fc4" Mar 08 22:15:29.627098 master-0 kubenswrapper[29458]: I0308 22:15:29.627020 29458 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a52268c7a9c6b5185ba5bf5a9ab921c572a0729b52859577eadf1743be8f6fc4"} err="failed to get container status \"a52268c7a9c6b5185ba5bf5a9ab921c572a0729b52859577eadf1743be8f6fc4\": rpc error: code = NotFound desc = could not find container \"a52268c7a9c6b5185ba5bf5a9ab921c572a0729b52859577eadf1743be8f6fc4\": container with ID starting with a52268c7a9c6b5185ba5bf5a9ab921c572a0729b52859577eadf1743be8f6fc4 not found: ID does not exist" Mar 08 22:15:29.627098 master-0 kubenswrapper[29458]: I0308 22:15:29.627046 29458 scope.go:117] "RemoveContainer" containerID="b90ed6d3a021521ef55628a3ddd89c995649bbb6a8fee39458005032f844912d" Mar 08 22:15:29.627574 master-0 kubenswrapper[29458]: E0308 22:15:29.627531 29458 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b90ed6d3a021521ef55628a3ddd89c995649bbb6a8fee39458005032f844912d\": container with ID starting with b90ed6d3a021521ef55628a3ddd89c995649bbb6a8fee39458005032f844912d not found: ID does not exist" containerID="b90ed6d3a021521ef55628a3ddd89c995649bbb6a8fee39458005032f844912d" Mar 08 22:15:29.627620 master-0 kubenswrapper[29458]: I0308 22:15:29.627569 29458 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"b90ed6d3a021521ef55628a3ddd89c995649bbb6a8fee39458005032f844912d"} err="failed to get container status \"b90ed6d3a021521ef55628a3ddd89c995649bbb6a8fee39458005032f844912d\": rpc error: code = NotFound desc = could not find container \"b90ed6d3a021521ef55628a3ddd89c995649bbb6a8fee39458005032f844912d\": container with ID starting with b90ed6d3a021521ef55628a3ddd89c995649bbb6a8fee39458005032f844912d not found: ID does not exist" Mar 08 22:15:29.627620 master-0 kubenswrapper[29458]: I0308 22:15:29.627594 29458 scope.go:117] "RemoveContainer" containerID="fb46cbac56af10ae89fd868088c513812dc7dc7a80fc88543a41d6671502fafd" Mar 08 22:15:29.627997 master-0 kubenswrapper[29458]: E0308 22:15:29.627955 29458 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb46cbac56af10ae89fd868088c513812dc7dc7a80fc88543a41d6671502fafd\": container with ID starting with fb46cbac56af10ae89fd868088c513812dc7dc7a80fc88543a41d6671502fafd not found: ID does not exist" containerID="fb46cbac56af10ae89fd868088c513812dc7dc7a80fc88543a41d6671502fafd" Mar 08 22:15:29.628043 master-0 kubenswrapper[29458]: I0308 22:15:29.627993 29458 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb46cbac56af10ae89fd868088c513812dc7dc7a80fc88543a41d6671502fafd"} err="failed to get container status \"fb46cbac56af10ae89fd868088c513812dc7dc7a80fc88543a41d6671502fafd\": rpc error: code = NotFound desc = could not find container \"fb46cbac56af10ae89fd868088c513812dc7dc7a80fc88543a41d6671502fafd\": container with ID starting with fb46cbac56af10ae89fd868088c513812dc7dc7a80fc88543a41d6671502fafd not found: ID does not exist" Mar 08 22:15:29.628043 master-0 kubenswrapper[29458]: I0308 22:15:29.628027 29458 scope.go:117] "RemoveContainer" containerID="d76596264e33079d456422f8118c10602257329cc4dfb420dc8ccdda43115051" Mar 08 22:15:29.628426 master-0 kubenswrapper[29458]: E0308 22:15:29.628383 29458 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d76596264e33079d456422f8118c10602257329cc4dfb420dc8ccdda43115051\": container with ID starting with d76596264e33079d456422f8118c10602257329cc4dfb420dc8ccdda43115051 not found: ID does not exist" containerID="d76596264e33079d456422f8118c10602257329cc4dfb420dc8ccdda43115051" Mar 08 22:15:29.628464 master-0 kubenswrapper[29458]: I0308 22:15:29.628428 29458 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d76596264e33079d456422f8118c10602257329cc4dfb420dc8ccdda43115051"} err="failed to get container status \"d76596264e33079d456422f8118c10602257329cc4dfb420dc8ccdda43115051\": rpc error: code = NotFound desc = could not find container \"d76596264e33079d456422f8118c10602257329cc4dfb420dc8ccdda43115051\": container with ID starting with d76596264e33079d456422f8118c10602257329cc4dfb420dc8ccdda43115051 not found: ID does not exist" Mar 08 22:15:29.913298 master-0 kubenswrapper[29458]: E0308 22:15:29.913214 29458 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 08 22:15:30.454427 master-0 kubenswrapper[29458]: E0308 22:15:30.454317 29458 kubelet.go:1929] "Failed creating a mirror pod for" err="Post 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:15:30.455816 master-0 kubenswrapper[29458]: I0308 22:15:30.455055 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:15:30.488995 master-0 kubenswrapper[29458]: W0308 22:15:30.488875 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod899242a15b2bdf3b4a04fb323647ca94.slice/crio-871ba9377f87bd054771dcabc603ce971cbbaeb30e72822fa7fa32bed3154315 WatchSource:0}: Error finding container 871ba9377f87bd054771dcabc603ce971cbbaeb30e72822fa7fa32bed3154315: Status 404 returned error can't find the container with id 871ba9377f87bd054771dcabc603ce971cbbaeb30e72822fa7fa32bed3154315 Mar 08 22:15:30.493323 master-0 kubenswrapper[29458]: E0308 22:15:30.492977 29458 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189afd8039d9a91a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:899242a15b2bdf3b4a04fb323647ca94,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 22:15:30.491918618 +0000 UTC m=+99.779976250,LastTimestamp:2026-03-08 22:15:30.491918618 +0000 UTC m=+99.779976250,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 22:15:30.504434 master-0 kubenswrapper[29458]: I0308 22:15:30.504337 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"899242a15b2bdf3b4a04fb323647ca94","Type":"ContainerStarted","Data":"871ba9377f87bd054771dcabc603ce971cbbaeb30e72822fa7fa32bed3154315"} Mar 08 22:15:30.715680 master-0 kubenswrapper[29458]: E0308 22:15:30.715453 29458 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Mar 08 22:15:30.984226 master-0 kubenswrapper[29458]: I0308 22:15:30.983944 29458 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c3280e9367536f782caf8bdc07edb85" path="/var/lib/kubelet/pods/4c3280e9367536f782caf8bdc07edb85/volumes" Mar 08 22:15:31.000525 master-0 kubenswrapper[29458]: I0308 22:15:31.000398 29458 status_manager.go:851] "Failed to get status for pod" podUID="4c3280e9367536f782caf8bdc07edb85" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial 
tcp 192.168.32.10:6443: connect: connection refused" Mar 08 22:15:31.008510 master-0 kubenswrapper[29458]: I0308 22:15:31.008419 29458 status_manager.go:851] "Failed to get status for pod" podUID="c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 22:15:31.009250 master-0 kubenswrapper[29458]: I0308 22:15:31.009194 29458 status_manager.go:851] "Failed to get status for pod" podUID="c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 22:15:31.519265 master-0 kubenswrapper[29458]: I0308 22:15:31.519170 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"899242a15b2bdf3b4a04fb323647ca94","Type":"ContainerStarted","Data":"680bc626daa2c5987ce239ac78852fa737cd8249340056e2004f1c4baeff289f"} Mar 08 22:15:31.520498 master-0 kubenswrapper[29458]: E0308 22:15:31.520431 29458 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:15:31.520635 master-0 kubenswrapper[29458]: I0308 22:15:31.520559 29458 status_manager.go:851] "Failed to get status for pod" podUID="c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 22:15:32.317796 master-0 kubenswrapper[29458]: E0308 22:15:32.317649 29458 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Mar 08 22:15:32.541507 master-0 kubenswrapper[29458]: E0308 22:15:32.541384 29458 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:15:33.972464 master-0 kubenswrapper[29458]: I0308 22:15:33.972311 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:15:33.976262 master-0 kubenswrapper[29458]: I0308 22:15:33.976141 29458 status_manager.go:851] "Failed to get status for pod" podUID="c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 22:15:34.009720 master-0 kubenswrapper[29458]: I0308 22:15:34.009549 29458 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="05b1a505-45e5-4193-a81e-402fe835e0b7" Mar 08 22:15:34.009720 master-0 kubenswrapper[29458]: I0308 22:15:34.009668 29458 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="05b1a505-45e5-4193-a81e-402fe835e0b7" Mar 08 22:15:34.011183 master-0 kubenswrapper[29458]: E0308 22:15:34.011101 29458 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:15:34.011906 master-0 kubenswrapper[29458]: I0308 22:15:34.011861 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:15:34.041014 master-0 kubenswrapper[29458]: W0308 22:15:34.040207 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod077dd10388b9e3e48a07382126e86621.slice/crio-57fe1327b6e1862abe83887094efddda818582e078adab77fab549e26db15192 WatchSource:0}: Error finding container 57fe1327b6e1862abe83887094efddda818582e078adab77fab549e26db15192: Status 404 returned error can't find the container with id 57fe1327b6e1862abe83887094efddda818582e078adab77fab549e26db15192 Mar 08 22:15:34.573221 master-0 kubenswrapper[29458]: I0308 22:15:34.573156 29458 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="ee0c28196844e21679bda67722c5e049056843b699f9cce9ac4380f7095be8ea" exitCode=0 Mar 08 22:15:34.573452 master-0 kubenswrapper[29458]: I0308 22:15:34.573352 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerDied","Data":"ee0c28196844e21679bda67722c5e049056843b699f9cce9ac4380f7095be8ea"} Mar 08 22:15:34.573506 master-0 kubenswrapper[29458]: I0308 22:15:34.573449 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"57fe1327b6e1862abe83887094efddda818582e078adab77fab549e26db15192"} Mar 08 22:15:34.573985 master-0 kubenswrapper[29458]: I0308 22:15:34.573949 29458 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="05b1a505-45e5-4193-a81e-402fe835e0b7" Mar 08 22:15:34.573985 master-0 kubenswrapper[29458]: I0308 22:15:34.573982 29458 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="05b1a505-45e5-4193-a81e-402fe835e0b7" Mar 08 22:15:34.575269 master-0 kubenswrapper[29458]: I0308 22:15:34.575217 29458 
status_manager.go:851] "Failed to get status for pod" podUID="c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 22:15:34.575360 master-0 kubenswrapper[29458]: E0308 22:15:34.575230 29458 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:15:34.578590 master-0 kubenswrapper[29458]: I0308 22:15:34.578551 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7e4fb17aa6f4ce82697c1badb6e3e623/kube-controller-manager/0.log" Mar 08 22:15:34.578669 master-0 kubenswrapper[29458]: I0308 22:15:34.578627 29458 generic.go:334] "Generic (PLEG): container finished" podID="7e4fb17aa6f4ce82697c1badb6e3e623" containerID="045d96fc5260120205fd3f9cca2039678cbcc24c6c931c6bbf3f1ba418756e6c" exitCode=1 Mar 08 22:15:34.578708 master-0 kubenswrapper[29458]: I0308 22:15:34.578669 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7e4fb17aa6f4ce82697c1badb6e3e623","Type":"ContainerDied","Data":"045d96fc5260120205fd3f9cca2039678cbcc24c6c931c6bbf3f1ba418756e6c"} Mar 08 22:15:34.579215 master-0 kubenswrapper[29458]: I0308 22:15:34.579180 29458 scope.go:117] "RemoveContainer" containerID="045d96fc5260120205fd3f9cca2039678cbcc24c6c931c6bbf3f1ba418756e6c" Mar 08 22:15:34.579974 master-0 kubenswrapper[29458]: I0308 22:15:34.579885 29458 status_manager.go:851] "Failed to get status for pod" podUID="c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 22:15:34.580827 master-0 kubenswrapper[29458]: I0308 22:15:34.580761 29458 status_manager.go:851] "Failed to get status for pod" podUID="7e4fb17aa6f4ce82697c1badb6e3e623" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 22:15:34.683810 master-0 kubenswrapper[29458]: E0308 22:15:34.683582 29458 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189afd8039d9a91a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:899242a15b2bdf3b4a04fb323647ca94,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" already present on 
machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 22:15:30.491918618 +0000 UTC m=+99.779976250,LastTimestamp:2026-03-08 22:15:30.491918618 +0000 UTC m=+99.779976250,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 22:15:35.593399 master-0 kubenswrapper[29458]: I0308 22:15:35.593326 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"d08717cae79965d2954c6d6186be0913e21355ab1ec5b02c3d79cbe4cd70df25"} Mar 08 22:15:35.593908 master-0 kubenswrapper[29458]: I0308 22:15:35.593413 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"eb750484037e3401725fa640e953b581b7e79af61a28a5c74c2d87b946a269a0"} Mar 08 22:15:35.598565 master-0 kubenswrapper[29458]: I0308 22:15:35.598489 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7e4fb17aa6f4ce82697c1badb6e3e623/kube-controller-manager/0.log" Mar 08 22:15:35.598717 master-0 kubenswrapper[29458]: I0308 22:15:35.598609 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7e4fb17aa6f4ce82697c1badb6e3e623","Type":"ContainerStarted","Data":"f3f780418e0dc78b1593ce2cd94d46df24ecbd7393affbd8ab7521d75f83183d"} Mar 08 22:15:36.609571 master-0 kubenswrapper[29458]: I0308 22:15:36.609521 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"6bd2bbc56617736182f222892255c3dd7b47f7c8d078f4003f1b158ba0133b66"} Mar 08 22:15:36.609571 master-0 kubenswrapper[29458]: I0308 22:15:36.609568 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"51920d7baa056e9964d5de495b797cffc576999a0aa091b578bcde53d507599f"} Mar 08 22:15:36.610509 master-0 kubenswrapper[29458]: I0308 22:15:36.609578 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"c02d6e4cdc6a62760a2042324c9d26d2ea5a171ae0ab2a3eeb0913ab8fef9de2"} Mar 08 22:15:36.610509 master-0 kubenswrapper[29458]: I0308 22:15:36.609874 29458 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="05b1a505-45e5-4193-a81e-402fe835e0b7" Mar 08 22:15:36.610509 master-0 kubenswrapper[29458]: I0308 22:15:36.609888 29458 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="05b1a505-45e5-4193-a81e-402fe835e0b7" Mar 08 22:15:36.610509 master-0 kubenswrapper[29458]: I0308 22:15:36.610163 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:15:37.242550 master-0 kubenswrapper[29458]: I0308 22:15:37.242439 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:15:39.013024 master-0 
kubenswrapper[29458]: I0308 22:15:39.012896 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:15:39.013024 master-0 kubenswrapper[29458]: I0308 22:15:39.013001 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:15:39.027929 master-0 kubenswrapper[29458]: I0308 22:15:39.020609 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:15:41.839277 master-0 kubenswrapper[29458]: I0308 22:15:41.839204 29458 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:15:41.902697 master-0 kubenswrapper[29458]: I0308 22:15:41.902575 29458 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="077dd10388b9e3e48a07382126e86621" podUID="6335ac71-e13d-4e4b-aea1-8cd74140c29f" Mar 08 22:15:42.665596 master-0 kubenswrapper[29458]: I0308 22:15:42.665519 29458 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="05b1a505-45e5-4193-a81e-402fe835e0b7" Mar 08 22:15:42.665596 master-0 kubenswrapper[29458]: I0308 22:15:42.665584 29458 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="05b1a505-45e5-4193-a81e-402fe835e0b7" Mar 08 22:15:42.942985 master-0 kubenswrapper[29458]: I0308 22:15:42.942792 29458 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="077dd10388b9e3e48a07382126e86621" podUID="6335ac71-e13d-4e4b-aea1-8cd74140c29f" Mar 08 22:15:44.569832 master-0 kubenswrapper[29458]: I0308 22:15:44.569746 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:15:44.574913 master-0 kubenswrapper[29458]: I0308 22:15:44.574852 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:15:44.687487 master-0 kubenswrapper[29458]: I0308 22:15:44.687370 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:15:45.358626 master-0 kubenswrapper[29458]: I0308 22:15:45.358494 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 08 22:15:45.489807 master-0 kubenswrapper[29458]: I0308 22:15:45.489738 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 08 22:15:45.490251 master-0 kubenswrapper[29458]: I0308 22:15:45.490222 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 08 22:15:45.886832 master-0 kubenswrapper[29458]: I0308 22:15:45.886770 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 08 22:15:45.908763 master-0 kubenswrapper[29458]: I0308 22:15:45.908671 29458 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-tqhmq" Mar 08 22:15:45.957680 master-0 kubenswrapper[29458]: I0308 22:15:45.957597 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-js89c" Mar 08 22:15:46.343814 master-0 kubenswrapper[29458]: I0308 22:15:46.343736 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 08 22:15:46.519184 master-0 kubenswrapper[29458]: I0308 22:15:46.519104 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 08 22:15:46.746046 master-0 kubenswrapper[29458]: I0308 22:15:46.745859 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 08 22:15:46.826806 master-0 kubenswrapper[29458]: I0308 22:15:46.826731 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 08 22:15:46.828723 master-0 kubenswrapper[29458]: I0308 22:15:46.828645 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 08 22:15:46.907308 master-0 kubenswrapper[29458]: I0308 22:15:46.907216 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 08 22:15:46.960858 master-0 kubenswrapper[29458]: I0308 22:15:46.960772 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 08 22:15:47.329539 master-0 kubenswrapper[29458]: I0308 22:15:47.329449 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 08 22:15:47.343271 master-0 kubenswrapper[29458]: I0308 22:15:47.342886 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 08 22:15:47.414058 master-0 kubenswrapper[29458]: I0308 22:15:47.413984 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 08 22:15:47.499113 master-0 kubenswrapper[29458]: I0308 22:15:47.496581 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 08 22:15:47.515126 master-0 kubenswrapper[29458]: I0308 22:15:47.515027 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 08 22:15:47.550462 master-0 kubenswrapper[29458]: I0308 22:15:47.550365 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-4m8r8" Mar 08 22:15:47.616665 master-0 kubenswrapper[29458]: I0308 22:15:47.616384 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-fk6p8" Mar 08 22:15:47.674134 master-0 kubenswrapper[29458]: I0308 22:15:47.673960 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 08 22:15:47.696049 master-0 kubenswrapper[29458]: I0308 22:15:47.695954 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 08 22:15:47.699014 master-0 
kubenswrapper[29458]: I0308 22:15:47.697793 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Mar 08 22:15:47.699203 master-0 kubenswrapper[29458]: I0308 22:15:47.699151 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 08 22:15:47.719459 master-0 kubenswrapper[29458]: I0308 22:15:47.719336 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 08 22:15:47.784044 master-0 kubenswrapper[29458]: I0308 22:15:47.783970 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 08 22:15:47.796419 master-0 kubenswrapper[29458]: I0308 22:15:47.796335 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 08 22:15:47.913943 master-0 kubenswrapper[29458]: I0308 22:15:47.913775 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 08 22:15:47.921630 master-0 kubenswrapper[29458]: I0308 22:15:47.921578 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 08 22:15:48.029646 master-0 kubenswrapper[29458]: I0308 22:15:48.029585 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 08 22:15:48.142059 master-0 kubenswrapper[29458]: I0308 22:15:48.141984 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 08 22:15:48.142831 master-0 kubenswrapper[29458]: I0308 22:15:48.142792 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 08 22:15:48.158307 master-0 kubenswrapper[29458]: I0308 22:15:48.158248 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 08 22:15:48.211853 master-0 kubenswrapper[29458]: I0308 22:15:48.211692 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-b4pnr" Mar 08 22:15:48.248584 master-0 kubenswrapper[29458]: I0308 22:15:48.248499 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 08 22:15:48.260637 master-0 kubenswrapper[29458]: I0308 22:15:48.260548 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 08 22:15:48.271899 master-0 kubenswrapper[29458]: I0308 22:15:48.271846 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-lmwn6" Mar 08 22:15:48.278493 master-0 kubenswrapper[29458]: I0308 22:15:48.278433 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 08 22:15:48.339502 master-0 kubenswrapper[29458]: I0308 22:15:48.339379 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 08 22:15:48.368579 master-0 kubenswrapper[29458]: I0308 22:15:48.368457 29458 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-client" Mar 08 22:15:48.370876 master-0 kubenswrapper[29458]: I0308 22:15:48.370812 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 08 22:15:48.406127 master-0 kubenswrapper[29458]: I0308 22:15:48.406029 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 08 22:15:48.451443 master-0 kubenswrapper[29458]: I0308 22:15:48.451352 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 08 22:15:48.461672 master-0 kubenswrapper[29458]: I0308 22:15:48.461579 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 08 22:15:48.595412 master-0 kubenswrapper[29458]: I0308 22:15:48.595305 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 08 22:15:48.654285 master-0 kubenswrapper[29458]: I0308 22:15:48.654222 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-bk27b" Mar 08 22:15:48.661365 master-0 kubenswrapper[29458]: I0308 22:15:48.661331 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 08 22:15:48.672476 master-0 kubenswrapper[29458]: I0308 22:15:48.672441 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-wjqj5" Mar 08 22:15:48.678755 master-0 kubenswrapper[29458]: I0308 22:15:48.678696 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 08 22:15:48.764144 master-0 kubenswrapper[29458]: I0308 22:15:48.763977 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 08 22:15:48.776127 master-0 kubenswrapper[29458]: I0308 22:15:48.776022 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 08 22:15:48.818959 master-0 kubenswrapper[29458]: I0308 22:15:48.818873 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 08 22:15:48.944389 master-0 kubenswrapper[29458]: I0308 22:15:48.944236 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 08 22:15:49.034099 master-0 kubenswrapper[29458]: I0308 22:15:49.033998 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 08 22:15:49.034409 master-0 kubenswrapper[29458]: I0308 22:15:49.034348 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 08 22:15:49.051305 master-0 kubenswrapper[29458]: I0308 22:15:49.051211 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 08 22:15:49.067314 master-0 kubenswrapper[29458]: I0308 22:15:49.067256 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 08 22:15:49.144463 master-0 
kubenswrapper[29458]: I0308 22:15:49.144341 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 08 22:15:49.197018 master-0 kubenswrapper[29458]: I0308 22:15:49.196867 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 08 22:15:49.204999 master-0 kubenswrapper[29458]: I0308 22:15:49.204888 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 08 22:15:49.234188 master-0 kubenswrapper[29458]: I0308 22:15:49.234035 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 08 22:15:49.234592 master-0 kubenswrapper[29458]: I0308 22:15:49.234527 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 08 22:15:49.237831 master-0 kubenswrapper[29458]: I0308 22:15:49.237766 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 08 22:15:49.243346 master-0 kubenswrapper[29458]: I0308 22:15:49.243311 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 08 22:15:49.278714 master-0 kubenswrapper[29458]: I0308 22:15:49.278648 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-cgg74" Mar 08 22:15:49.307582 master-0 kubenswrapper[29458]: I0308 22:15:49.307467 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 08 22:15:49.326152 master-0 kubenswrapper[29458]: I0308 22:15:49.324202 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 08 22:15:49.354866 master-0 kubenswrapper[29458]: I0308 22:15:49.354361 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 08 22:15:49.367842 master-0 kubenswrapper[29458]: I0308 22:15:49.367764 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 08 22:15:49.411690 master-0 kubenswrapper[29458]: I0308 22:15:49.411622 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 08 22:15:49.466798 master-0 kubenswrapper[29458]: I0308 22:15:49.465941 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 08 22:15:49.478390 master-0 kubenswrapper[29458]: I0308 22:15:49.478326 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-fnk6l" Mar 08 22:15:49.496910 master-0 kubenswrapper[29458]: I0308 22:15:49.496833 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 08 22:15:49.505812 master-0 kubenswrapper[29458]: I0308 22:15:49.505757 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 08 22:15:49.584112 master-0 kubenswrapper[29458]: I0308 22:15:49.584011 29458 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 08 22:15:49.584904 master-0 kubenswrapper[29458]: I0308 22:15:49.584880 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 08 22:15:49.625741 master-0 kubenswrapper[29458]: I0308 22:15:49.625371 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-clq2r" Mar 08 22:15:49.626153 master-0 kubenswrapper[29458]: I0308 22:15:49.626039 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 08 22:15:49.699813 master-0 kubenswrapper[29458]: I0308 22:15:49.699776 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 08 22:15:49.722107 master-0 kubenswrapper[29458]: I0308 22:15:49.721929 29458 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 08 22:15:49.734335 master-0 kubenswrapper[29458]: I0308 22:15:49.734301 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 08 22:15:49.751607 master-0 kubenswrapper[29458]: I0308 22:15:49.751560 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 08 22:15:49.761154 master-0 kubenswrapper[29458]: I0308 22:15:49.761113 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 08 22:15:49.779908 master-0 kubenswrapper[29458]: I0308 22:15:49.779871 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 08 22:15:49.807735 master-0 kubenswrapper[29458]: I0308 22:15:49.807351 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 08 22:15:49.859129 master-0 kubenswrapper[29458]: I0308 22:15:49.859062 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 08 22:15:49.872530 master-0 kubenswrapper[29458]: I0308 22:15:49.872492 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 08 22:15:49.881012 master-0 kubenswrapper[29458]: I0308 22:15:49.880968 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 08 22:15:49.900568 master-0 kubenswrapper[29458]: I0308 22:15:49.900506 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 08 22:15:49.907224 master-0 kubenswrapper[29458]: I0308 22:15:49.907118 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 08 22:15:49.929176 master-0 kubenswrapper[29458]: I0308 22:15:49.927340 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-xjqqb" Mar 08 22:15:49.943222 master-0 kubenswrapper[29458]: I0308 22:15:49.943164 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 08 22:15:49.952727 master-0 kubenswrapper[29458]: I0308 22:15:49.952671 29458 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 08 22:15:49.966627 master-0 kubenswrapper[29458]: I0308 22:15:49.966560 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 08 22:15:50.029766 master-0 kubenswrapper[29458]: I0308 22:15:50.029580 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 08 22:15:50.064714 master-0 kubenswrapper[29458]: I0308 22:15:50.063755 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 08 22:15:50.115100 master-0 kubenswrapper[29458]: I0308 22:15:50.115006 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-lvhnl" Mar 08 22:15:50.175119 master-0 kubenswrapper[29458]: I0308 22:15:50.171911 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 08 22:15:50.191101 master-0 kubenswrapper[29458]: I0308 22:15:50.190851 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 08 22:15:50.200095 master-0 kubenswrapper[29458]: I0308 22:15:50.195767 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 08 22:15:50.208094 master-0 kubenswrapper[29458]: I0308 22:15:50.204549 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 08 22:15:50.212089 master-0 kubenswrapper[29458]: I0308 22:15:50.209956 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 08 22:15:50.232703 master-0 kubenswrapper[29458]: I0308 22:15:50.232627 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 08 22:15:50.357903 master-0 kubenswrapper[29458]: I0308 22:15:50.357836 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 08 22:15:50.383126 master-0 kubenswrapper[29458]: I0308 22:15:50.382985 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 08 22:15:50.387163 master-0 kubenswrapper[29458]: I0308 22:15:50.387095 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-7kdzp" Mar 08 22:15:50.407151 master-0 kubenswrapper[29458]: I0308 22:15:50.407091 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 08 22:15:50.415125 master-0 kubenswrapper[29458]: I0308 22:15:50.415043 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 08 22:15:50.440109 master-0 kubenswrapper[29458]: I0308 22:15:50.440024 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 08 22:15:50.460557 master-0 kubenswrapper[29458]: I0308 22:15:50.460467 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 08 22:15:50.467248 master-0 kubenswrapper[29458]: I0308 22:15:50.467178 29458 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 08 22:15:50.480598 master-0 kubenswrapper[29458]: I0308 22:15:50.480507 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-g8h2t" Mar 08 22:15:50.521673 master-0 kubenswrapper[29458]: I0308 22:15:50.521570 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-t7cwt" Mar 08 22:15:50.567106 master-0 kubenswrapper[29458]: I0308 22:15:50.564531 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 08 22:15:50.581546 master-0 kubenswrapper[29458]: I0308 22:15:50.581476 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 08 22:15:50.604427 master-0 kubenswrapper[29458]: I0308 22:15:50.604321 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 08 22:15:50.618731 master-0 kubenswrapper[29458]: I0308 22:15:50.618551 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 08 22:15:50.632714 master-0 kubenswrapper[29458]: I0308 22:15:50.632648 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 08 22:15:50.640004 master-0 kubenswrapper[29458]: I0308 22:15:50.639947 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 08 22:15:50.644928 master-0 kubenswrapper[29458]: I0308 22:15:50.644867 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-xhdwj" Mar 08 22:15:50.671829 master-0 kubenswrapper[29458]: I0308 22:15:50.671719 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 08 22:15:50.673616 master-0 kubenswrapper[29458]: I0308 22:15:50.673532 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 08 22:15:50.673616 master-0 kubenswrapper[29458]: I0308 22:15:50.673580 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 08 22:15:50.693010 master-0 kubenswrapper[29458]: I0308 22:15:50.692932 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-ldwk8" Mar 08 22:15:50.773404 master-0 kubenswrapper[29458]: I0308 22:15:50.773057 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-qdmfw" Mar 08 22:15:50.783810 master-0 kubenswrapper[29458]: I0308 22:15:50.783746 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 08 22:15:50.802640 master-0 kubenswrapper[29458]: I0308 22:15:50.802604 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 08 22:15:50.865436 master-0 kubenswrapper[29458]: I0308 22:15:50.865349 29458 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 08 22:15:50.891905 master-0 kubenswrapper[29458]: I0308 22:15:50.891761 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 08 22:15:50.938241 master-0 kubenswrapper[29458]: I0308 22:15:50.938177 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 08 22:15:50.942315 master-0 kubenswrapper[29458]: I0308 22:15:50.942263 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-z5x7c" Mar 08 22:15:50.944390 master-0 kubenswrapper[29458]: I0308 22:15:50.944312 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 08 22:15:50.981185 master-0 kubenswrapper[29458]: I0308 22:15:50.981063 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 08 22:15:51.002433 master-0 kubenswrapper[29458]: I0308 22:15:51.002231 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 08 22:15:51.015904 master-0 kubenswrapper[29458]: I0308 22:15:51.015773 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 08 22:15:51.017413 master-0 kubenswrapper[29458]: I0308 22:15:51.016260 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 08 22:15:51.029698 master-0 kubenswrapper[29458]: I0308 22:15:51.029622 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 08 22:15:51.079442 master-0 kubenswrapper[29458]: I0308 22:15:51.079332 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 08 22:15:51.097488 master-0 kubenswrapper[29458]: I0308 22:15:51.097403 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 08 22:15:51.105280 master-0 kubenswrapper[29458]: I0308 22:15:51.105204 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 08 22:15:51.149540 master-0 kubenswrapper[29458]: I0308 22:15:51.142554 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 08 22:15:51.149540 master-0 kubenswrapper[29458]: I0308 22:15:51.147377 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 08 22:15:51.169933 master-0 kubenswrapper[29458]: I0308 22:15:51.169800 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 08 22:15:51.196251 master-0 kubenswrapper[29458]: I0308 22:15:51.196128 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-c5hcb" Mar 08 22:15:51.218101 master-0 kubenswrapper[29458]: I0308 22:15:51.217977 29458 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-6d4tw" Mar 08 22:15:51.234915 master-0 kubenswrapper[29458]: I0308 22:15:51.234820 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 08 22:15:51.303376 master-0 kubenswrapper[29458]: I0308 22:15:51.303270 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 08 22:15:51.338385 master-0 kubenswrapper[29458]: I0308 22:15:51.338321 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 08 22:15:51.344159 master-0 kubenswrapper[29458]: I0308 22:15:51.344064 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 08 22:15:51.352866 master-0 kubenswrapper[29458]: I0308 22:15:51.352812 29458 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 08 22:15:51.385106 master-0 kubenswrapper[29458]: I0308 22:15:51.385001 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 08 22:15:51.415630 master-0 kubenswrapper[29458]: I0308 22:15:51.415464 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 08 22:15:51.416261 master-0 kubenswrapper[29458]: I0308 22:15:51.415764 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-6lw8c" Mar 08 22:15:51.443317 master-0 kubenswrapper[29458]: I0308 22:15:51.443232 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 08 22:15:51.473542 master-0 kubenswrapper[29458]: I0308 22:15:51.473451 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 08 22:15:51.492264 master-0 kubenswrapper[29458]: I0308 22:15:51.492173 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 08 22:15:51.501864 master-0 kubenswrapper[29458]: I0308 22:15:51.501803 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 08 22:15:51.536206 master-0 kubenswrapper[29458]: I0308 22:15:51.533791 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 08 22:15:51.554087 master-0 kubenswrapper[29458]: I0308 22:15:51.554006 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 08 22:15:51.621605 master-0 kubenswrapper[29458]: I0308 22:15:51.621531 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-2sq4s" Mar 08 22:15:51.631935 master-0 kubenswrapper[29458]: I0308 22:15:51.631861 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-hlmng" Mar 08 22:15:51.647623 master-0 kubenswrapper[29458]: I0308 22:15:51.647544 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 08 22:15:51.654642 master-0 kubenswrapper[29458]: I0308 
22:15:51.654544 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 08 22:15:51.661791 master-0 kubenswrapper[29458]: I0308 22:15:51.661733 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 08 22:15:51.708696 master-0 kubenswrapper[29458]: I0308 22:15:51.708531 29458 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 08 22:15:51.727366 master-0 kubenswrapper[29458]: I0308 22:15:51.727269 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 08 22:15:51.772891 master-0 kubenswrapper[29458]: I0308 22:15:51.772432 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 08 22:15:51.818628 master-0 kubenswrapper[29458]: I0308 22:15:51.818535 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 08 22:15:51.859709 master-0 kubenswrapper[29458]: I0308 22:15:51.859375 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 08 22:15:51.888301 master-0 kubenswrapper[29458]: I0308 22:15:51.886785 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 08 22:15:51.888745 master-0 kubenswrapper[29458]: I0308 22:15:51.888684 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 08 22:15:51.922566 master-0 kubenswrapper[29458]: I0308 22:15:51.922503 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 08 22:15:51.934964 master-0 kubenswrapper[29458]: I0308 22:15:51.934875 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 08 22:15:51.937123 master-0 kubenswrapper[29458]: I0308 22:15:51.937059 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 08 22:15:51.938860 master-0 kubenswrapper[29458]: I0308 22:15:51.938833 29458 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 08 22:15:52.010382 master-0 kubenswrapper[29458]: I0308 22:15:52.010252 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 08 22:15:52.013974 master-0 kubenswrapper[29458]: I0308 22:15:52.013906 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 08 22:15:52.026283 master-0 kubenswrapper[29458]: I0308 22:15:52.026238 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 08 22:15:52.038726 master-0 kubenswrapper[29458]: I0308 22:15:52.038592 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 08 22:15:52.063346 master-0 kubenswrapper[29458]: I0308 22:15:52.063284 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 08 
22:15:52.084209 master-0 kubenswrapper[29458]: I0308 22:15:52.084131 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 08 22:15:52.102918 master-0 kubenswrapper[29458]: I0308 22:15:52.102854 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 08 22:15:52.136153 master-0 kubenswrapper[29458]: I0308 22:15:52.136044 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 08 22:15:52.147142 master-0 kubenswrapper[29458]: I0308 22:15:52.147065 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 08 22:15:52.157104 master-0 kubenswrapper[29458]: I0308 22:15:52.157046 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 08 22:15:52.173477 master-0 kubenswrapper[29458]: I0308 22:15:52.173416 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 08 22:15:52.221568 master-0 kubenswrapper[29458]: I0308 22:15:52.221501 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 08 22:15:52.227706 master-0 kubenswrapper[29458]: I0308 22:15:52.227660 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 08 22:15:52.234205 master-0 kubenswrapper[29458]: I0308 22:15:52.234162 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 08 22:15:52.280959 master-0 kubenswrapper[29458]: I0308 22:15:52.280764 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 08 22:15:52.312582 master-0 kubenswrapper[29458]: I0308 22:15:52.312520 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 08 22:15:52.326839 master-0 kubenswrapper[29458]: I0308 22:15:52.326461 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 08 22:15:52.414576 master-0 kubenswrapper[29458]: I0308 22:15:52.414494 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-j75vf" Mar 08 22:15:52.424402 master-0 kubenswrapper[29458]: I0308 22:15:52.424327 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 08 22:15:52.426287 master-0 kubenswrapper[29458]: I0308 22:15:52.426219 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 08 22:15:52.449140 master-0 kubenswrapper[29458]: I0308 22:15:52.449019 29458 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 08 22:15:52.460905 master-0 kubenswrapper[29458]: I0308 22:15:52.460818 29458 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 08 22:15:52.461313 master-0 kubenswrapper[29458]: I0308 22:15:52.460928 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 08 22:15:52.469228 master-0 
kubenswrapper[29458]: I0308 22:15:52.469164 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:15:52.469523 master-0 kubenswrapper[29458]: I0308 22:15:52.469255 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 22:15:52.469784 master-0 kubenswrapper[29458]: I0308 22:15:52.469741 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 08 22:15:52.483236 master-0 kubenswrapper[29458]: I0308 22:15:52.483169 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 08 22:15:52.494871 master-0 kubenswrapper[29458]: I0308 22:15:52.494799 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 08 22:15:52.495560 master-0 kubenswrapper[29458]: I0308 22:15:52.495487 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 08 22:15:52.507313 master-0 kubenswrapper[29458]: I0308 22:15:52.505005 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=11.504977316 podStartE2EDuration="11.504977316s" podCreationTimestamp="2026-03-08 22:15:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:15:52.498605956 +0000 UTC m=+121.786663578" watchObservedRunningTime="2026-03-08 22:15:52.504977316 +0000 UTC m=+121.793034908" Mar 08 22:15:52.551963 master-0 kubenswrapper[29458]: I0308 22:15:52.551797 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 08 22:15:52.555758 master-0 kubenswrapper[29458]: I0308 22:15:52.555704 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-cfkxm" Mar 08 22:15:52.583116 master-0 kubenswrapper[29458]: I0308 22:15:52.583025 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 08 22:15:52.585043 master-0 kubenswrapper[29458]: I0308 22:15:52.585002 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 08 22:15:52.608500 master-0 kubenswrapper[29458]: I0308 22:15:52.608389 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 08 22:15:52.654328 master-0 kubenswrapper[29458]: I0308 22:15:52.654240 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 08 22:15:52.706798 master-0 kubenswrapper[29458]: I0308 22:15:52.706735 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 08 22:15:52.732698 master-0 kubenswrapper[29458]: I0308 22:15:52.732632 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
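
The pod_startup_latency_tracker entry above is worth decoding: firstStartedPulling and lastFinishedPulling are the zero time (no image pull happened), so podStartSLOduration collapses to watchObservedRunningTime minus podCreationTimestamp. A quick check of that arithmetic, with the two timestamps copied from the entry (monotonic "m=+..." suffixes dropped; this is a sanity check, not kubelet code):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Layout matches the log's timestamp format.
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	created, err := time.Parse(layout, "2026-03-08 22:15:41 +0000 UTC")
    	if err != nil {
    		panic(err)
    	}
    	running, err := time.Parse(layout, "2026-03-08 22:15:52.504977316 +0000 UTC")
    	if err != nil {
    		panic(err)
    	}
    	// Prints 11.504977316s, matching podStartE2EDuration in the entry.
    	fmt.Println(running.Sub(created))
    }

Mar 08 22:15:52.756065 master-0 kubenswrapper[29458]: I0308 22:15:52.756006 29458 reflector.go:368] Caches populated for *v1.Secret from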
object-"openshift-apiserver"/"encryption-config-1" Mar 08 22:15:52.760707 master-0 kubenswrapper[29458]: I0308 22:15:52.760667 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 08 22:15:52.790337 master-0 kubenswrapper[29458]: I0308 22:15:52.790275 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 08 22:15:52.797279 master-0 kubenswrapper[29458]: I0308 22:15:52.797222 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 08 22:15:52.861725 master-0 kubenswrapper[29458]: I0308 22:15:52.861650 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 08 22:15:52.925153 master-0 kubenswrapper[29458]: I0308 22:15:52.925024 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 08 22:15:52.985433 master-0 kubenswrapper[29458]: I0308 22:15:52.985378 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 08 22:15:53.017618 master-0 kubenswrapper[29458]: I0308 22:15:53.017543 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 08 22:15:53.024023 master-0 kubenswrapper[29458]: I0308 22:15:53.023934 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-4jq4h" Mar 08 22:15:53.089901 master-0 kubenswrapper[29458]: I0308 22:15:53.089823 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 08 22:15:53.130678 master-0 kubenswrapper[29458]: I0308 22:15:53.130493 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 08 22:15:53.137418 master-0 kubenswrapper[29458]: I0308 22:15:53.137348 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 08 22:15:53.162798 master-0 kubenswrapper[29458]: I0308 22:15:53.162703 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 08 22:15:53.189943 master-0 kubenswrapper[29458]: I0308 22:15:53.189846 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 08 22:15:53.223859 master-0 kubenswrapper[29458]: I0308 22:15:53.223782 29458 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 08 22:15:53.224179 master-0 kubenswrapper[29458]: I0308 22:15:53.224107 29458 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="899242a15b2bdf3b4a04fb323647ca94" containerName="startup-monitor" containerID="cri-o://680bc626daa2c5987ce239ac78852fa737cd8249340056e2004f1c4baeff289f" gracePeriod=5 Mar 08 22:15:53.230662 master-0 kubenswrapper[29458]: I0308 22:15:53.230604 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 08 22:15:53.268869 master-0 kubenswrapper[29458]: I0308 22:15:53.268795 29458 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 08 22:15:53.291681 master-0 kubenswrapper[29458]: I0308 22:15:53.291424 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-jfbvc" Mar 08 22:15:53.320021 master-0 kubenswrapper[29458]: I0308 22:15:53.319956 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 08 22:15:53.320911 master-0 kubenswrapper[29458]: I0308 22:15:53.320840 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 08 22:15:53.348640 master-0 kubenswrapper[29458]: I0308 22:15:53.348566 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 08 22:15:53.357635 master-0 kubenswrapper[29458]: I0308 22:15:53.357574 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 08 22:15:53.427299 master-0 kubenswrapper[29458]: I0308 22:15:53.427119 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 08 22:15:53.537136 master-0 kubenswrapper[29458]: I0308 22:15:53.537012 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 08 22:15:53.544959 master-0 kubenswrapper[29458]: I0308 22:15:53.544896 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 08 22:15:53.688507 master-0 kubenswrapper[29458]: I0308 22:15:53.688272 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 08 22:15:53.701220 master-0 kubenswrapper[29458]: I0308 22:15:53.701132 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 08 22:15:53.770986 master-0 kubenswrapper[29458]: I0308 22:15:53.770763 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 08 22:15:53.804132 master-0 kubenswrapper[29458]: I0308 22:15:53.803937 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 08 22:15:53.840524 master-0 kubenswrapper[29458]: I0308 22:15:53.840448 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 08 22:15:53.853328 master-0 kubenswrapper[29458]: I0308 22:15:53.853261 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 08 22:15:53.885284 master-0 kubenswrapper[29458]: I0308 22:15:53.885186 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 08 22:15:54.020975 master-0 kubenswrapper[29458]: I0308 22:15:54.020594 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 08 22:15:54.028149 master-0 kubenswrapper[29458]: I0308 22:15:54.028057 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 08 22:15:54.070621 
master-0 kubenswrapper[29458]: I0308 22:15:54.070551 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 08 22:15:54.101579 master-0 kubenswrapper[29458]: I0308 22:15:54.101522 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 08 22:15:54.192247 master-0 kubenswrapper[29458]: I0308 22:15:54.192171 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 08 22:15:54.195378 master-0 kubenswrapper[29458]: I0308 22:15:54.195312 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 08 22:15:54.196209 master-0 kubenswrapper[29458]: I0308 22:15:54.195391 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 08 22:15:54.239578 master-0 kubenswrapper[29458]: I0308 22:15:54.239494 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 08 22:15:54.289769 master-0 kubenswrapper[29458]: I0308 22:15:54.289546 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 08 22:15:54.302152 master-0 kubenswrapper[29458]: I0308 22:15:54.302049 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 08 22:15:54.303411 master-0 kubenswrapper[29458]: I0308 22:15:54.303365 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 08 22:15:54.315054 master-0 kubenswrapper[29458]: I0308 22:15:54.314996 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 08 22:15:54.425524 master-0 kubenswrapper[29458]: I0308 22:15:54.425427 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 08 22:15:54.499102 master-0 kubenswrapper[29458]: I0308 22:15:54.498963 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 08 22:15:54.598186 master-0 kubenswrapper[29458]: I0308 22:15:54.597369 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 08 22:15:54.599121 master-0 kubenswrapper[29458]: I0308 22:15:54.598695 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 08 22:15:54.606883 master-0 kubenswrapper[29458]: I0308 22:15:54.606802 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 08 22:15:54.649275 master-0 kubenswrapper[29458]: I0308 22:15:54.648906 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 08 22:15:54.718899 master-0 kubenswrapper[29458]: I0308 22:15:54.718826 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 08 22:15:54.964971 master-0 kubenswrapper[29458]: I0308 22:15:54.964809 29458 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 08 22:15:55.069321 master-0 kubenswrapper[29458]: I0308 22:15:55.066164 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 08 22:15:55.075770 master-0 kubenswrapper[29458]: I0308 22:15:55.075578 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 08 22:15:55.110360 master-0 kubenswrapper[29458]: I0308 22:15:55.110257 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 08 22:15:55.128938 master-0 kubenswrapper[29458]: I0308 22:15:55.128855 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 08 22:15:55.203918 master-0 kubenswrapper[29458]: I0308 22:15:55.203830 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-mqlfp" Mar 08 22:15:55.232578 master-0 kubenswrapper[29458]: I0308 22:15:55.232416 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 08 22:15:55.315150 master-0 kubenswrapper[29458]: I0308 22:15:55.315047 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 08 22:15:55.356807 master-0 kubenswrapper[29458]: I0308 22:15:55.356674 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 08 22:15:55.513056 master-0 kubenswrapper[29458]: I0308 22:15:55.512844 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 08 22:15:55.603607 master-0 kubenswrapper[29458]: I0308 22:15:55.603480 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4ct7k" Mar 08 22:15:55.737237 master-0 kubenswrapper[29458]: I0308 22:15:55.737127 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 08 22:15:55.839254 master-0 kubenswrapper[29458]: I0308 22:15:55.839150 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dv1om8r64ct8c" Mar 08 22:15:56.057108 master-0 kubenswrapper[29458]: I0308 22:15:56.057000 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 08 22:15:56.096294 master-0 kubenswrapper[29458]: I0308 22:15:56.096062 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 08 22:15:57.757718 master-0 kubenswrapper[29458]: I0308 22:15:57.757662 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 08 22:15:58.803223 master-0 kubenswrapper[29458]: I0308 22:15:58.803166 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_899242a15b2bdf3b4a04fb323647ca94/startup-monitor/0.log" Mar 08 22:15:58.803786 master-0 kubenswrapper[29458]: I0308 22:15:58.803226 29458 generic.go:334] "Generic (PLEG): container finished" 
podID="899242a15b2bdf3b4a04fb323647ca94" containerID="680bc626daa2c5987ce239ac78852fa737cd8249340056e2004f1c4baeff289f" exitCode=137 Mar 08 22:15:58.803786 master-0 kubenswrapper[29458]: I0308 22:15:58.803273 29458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="871ba9377f87bd054771dcabc603ce971cbbaeb30e72822fa7fa32bed3154315" Mar 08 22:15:58.830716 master-0 kubenswrapper[29458]: I0308 22:15:58.830657 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_899242a15b2bdf3b4a04fb323647ca94/startup-monitor/0.log" Mar 08 22:15:58.830976 master-0 kubenswrapper[29458]: I0308 22:15:58.830788 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:15:58.875101 master-0 kubenswrapper[29458]: I0308 22:15:58.875004 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " Mar 08 22:15:58.875472 master-0 kubenswrapper[29458]: I0308 22:15:58.875177 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " Mar 08 22:15:58.875472 master-0 kubenswrapper[29458]: I0308 22:15:58.875218 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " Mar 08 22:15:58.875472 master-0 kubenswrapper[29458]: I0308 22:15:58.875264 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests" (OuterVolumeSpecName: "manifests") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:15:58.875472 master-0 kubenswrapper[29458]: I0308 22:15:58.875334 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " Mar 08 22:15:58.875472 master-0 kubenswrapper[29458]: I0308 22:15:58.875343 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock" (OuterVolumeSpecName: "var-lock") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
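
The exitCode=137 above closes the loop on the "Killing container with a grace period" entry at 22:15:53.224: gracePeriod=5 expired, the runtime escalated to SIGKILL, and the exit was observed at 22:15:58.803, roughly five seconds later. 137 follows the usual 128-plus-signal convention for processes killed by a signal, as this tiny decoder shows:

    package main

    import "fmt"

    func main() {
    	// Container exit codes above 128 encode the fatal signal as
    	// (exit code - 128); 137 = 128 + 9, and signal 9 is SIGKILL,
    	// which is what the runtime sends once a grace period expires.
    	const exitCode = 137
    	if exitCode > 128 {
    		fmt.Printf("killed by signal %d (9 == SIGKILL)\n", exitCode-128)
    	}
    }

Mar 08 22:15:58.875472 master-0 kubenswrapper[29458]: I0308 22:15:58.875429 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "resource-dir".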
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:15:58.875746 master-0 kubenswrapper[29458]: I0308 22:15:58.875375 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " Mar 08 22:15:58.875746 master-0 kubenswrapper[29458]: I0308 22:15:58.875502 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log" (OuterVolumeSpecName: "var-log") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:15:58.875903 master-0 kubenswrapper[29458]: I0308 22:15:58.875868 29458 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") on node \"master-0\" DevicePath \"\"" Mar 08 22:15:58.875957 master-0 kubenswrapper[29458]: I0308 22:15:58.875900 29458 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") on node \"master-0\" DevicePath \"\"" Mar 08 22:15:58.875957 master-0 kubenswrapper[29458]: I0308 22:15:58.875919 29458 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 08 22:15:58.875957 master-0 kubenswrapper[29458]: I0308 22:15:58.875936 29458 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 22:15:58.882481 master-0 kubenswrapper[29458]: I0308 22:15:58.882406 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:15:58.977586 master-0 kubenswrapper[29458]: I0308 22:15:58.977507 29458 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 22:15:58.984311 master-0 kubenswrapper[29458]: I0308 22:15:58.984228 29458 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="899242a15b2bdf3b4a04fb323647ca94" path="/var/lib/kubelet/pods/899242a15b2bdf3b4a04fb323647ca94/volumes" Mar 08 22:15:59.810956 master-0 kubenswrapper[29458]: I0308 22:15:59.810860 29458 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 22:16:08.560430 master-0 kubenswrapper[29458]: I0308 22:16:08.560340 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-6c7fb6b958-5xtm2"] Mar 08 22:16:08.561675 master-0 kubenswrapper[29458]: E0308 22:16:08.560641 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899242a15b2bdf3b4a04fb323647ca94" containerName="startup-monitor" Mar 08 22:16:08.561675 master-0 kubenswrapper[29458]: I0308 22:16:08.560655 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="899242a15b2bdf3b4a04fb323647ca94" containerName="startup-monitor" Mar 08 22:16:08.561675 master-0 kubenswrapper[29458]: E0308 22:16:08.560675 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19" containerName="installer" Mar 08 22:16:08.561675 master-0 kubenswrapper[29458]: I0308 22:16:08.560682 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19" containerName="installer" Mar 08 22:16:08.561675 master-0 kubenswrapper[29458]: I0308 22:16:08.560827 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="899242a15b2bdf3b4a04fb323647ca94" containerName="startup-monitor" Mar 08 22:16:08.561675 master-0 kubenswrapper[29458]: I0308 22:16:08.560900 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8c0c0ca-e204-42f5-a3fe-79ca7d9c2d19" containerName="installer" Mar 08 22:16:08.561675 master-0 kubenswrapper[29458]: I0308 22:16:08.561360 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-6c7fb6b958-5xtm2" Mar 08 22:16:08.563595 master-0 kubenswrapper[29458]: I0308 22:16:08.563394 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-s6bt6" Mar 08 22:16:08.563595 master-0 kubenswrapper[29458]: I0308 22:16:08.563442 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 08 22:16:08.566551 master-0 kubenswrapper[29458]: I0308 22:16:08.564108 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 08 22:16:08.566551 master-0 kubenswrapper[29458]: I0308 22:16:08.564112 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 08 22:16:08.566551 master-0 kubenswrapper[29458]: I0308 22:16:08.564726 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 08 22:16:08.573119 master-0 kubenswrapper[29458]: I0308 22:16:08.571601 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 08 22:16:08.596302 master-0 kubenswrapper[29458]: I0308 22:16:08.596225 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-6c7fb6b958-5xtm2"] Mar 08 22:16:08.638750 master-0 kubenswrapper[29458]: I0308 22:16:08.638673 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/547efe6c-f422-42a9-8db9-7e4e0620c952-config\") pod \"console-operator-6c7fb6b958-5xtm2\" (UID: \"547efe6c-f422-42a9-8db9-7e4e0620c952\") " 
pod="openshift-console-operator/console-operator-6c7fb6b958-5xtm2" Mar 08 22:16:08.638750 master-0 kubenswrapper[29458]: I0308 22:16:08.638742 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/547efe6c-f422-42a9-8db9-7e4e0620c952-trusted-ca\") pod \"console-operator-6c7fb6b958-5xtm2\" (UID: \"547efe6c-f422-42a9-8db9-7e4e0620c952\") " pod="openshift-console-operator/console-operator-6c7fb6b958-5xtm2" Mar 08 22:16:08.639056 master-0 kubenswrapper[29458]: I0308 22:16:08.638774 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/547efe6c-f422-42a9-8db9-7e4e0620c952-serving-cert\") pod \"console-operator-6c7fb6b958-5xtm2\" (UID: \"547efe6c-f422-42a9-8db9-7e4e0620c952\") " pod="openshift-console-operator/console-operator-6c7fb6b958-5xtm2" Mar 08 22:16:08.639056 master-0 kubenswrapper[29458]: I0308 22:16:08.638824 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xnfx\" (UniqueName: \"kubernetes.io/projected/547efe6c-f422-42a9-8db9-7e4e0620c952-kube-api-access-8xnfx\") pod \"console-operator-6c7fb6b958-5xtm2\" (UID: \"547efe6c-f422-42a9-8db9-7e4e0620c952\") " pod="openshift-console-operator/console-operator-6c7fb6b958-5xtm2" Mar 08 22:16:08.664967 master-0 kubenswrapper[29458]: I0308 22:16:08.662455 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-npp24"] Mar 08 22:16:08.664967 master-0 kubenswrapper[29458]: I0308 22:16:08.663295 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-npp24" Mar 08 22:16:08.665776 master-0 kubenswrapper[29458]: I0308 22:16:08.665699 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 08 22:16:08.666443 master-0 kubenswrapper[29458]: I0308 22:16:08.665944 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-w9zsz" Mar 08 22:16:08.666443 master-0 kubenswrapper[29458]: I0308 22:16:08.666121 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 08 22:16:08.666443 master-0 kubenswrapper[29458]: I0308 22:16:08.666279 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 08 22:16:08.677137 master-0 kubenswrapper[29458]: I0308 22:16:08.676978 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-npp24"] Mar 08 22:16:08.740378 master-0 kubenswrapper[29458]: I0308 22:16:08.740286 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xnfx\" (UniqueName: \"kubernetes.io/projected/547efe6c-f422-42a9-8db9-7e4e0620c952-kube-api-access-8xnfx\") pod \"console-operator-6c7fb6b958-5xtm2\" (UID: \"547efe6c-f422-42a9-8db9-7e4e0620c952\") " pod="openshift-console-operator/console-operator-6c7fb6b958-5xtm2" Mar 08 22:16:08.740378 master-0 kubenswrapper[29458]: I0308 22:16:08.740373 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a67ce4f-b5e5-4a5e-bac2-7bdfd3916196-cert\") pod \"ingress-canary-npp24\" (UID: 
\"3a67ce4f-b5e5-4a5e-bac2-7bdfd3916196\") " pod="openshift-ingress-canary/ingress-canary-npp24" Mar 08 22:16:08.740787 master-0 kubenswrapper[29458]: I0308 22:16:08.740458 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/547efe6c-f422-42a9-8db9-7e4e0620c952-config\") pod \"console-operator-6c7fb6b958-5xtm2\" (UID: \"547efe6c-f422-42a9-8db9-7e4e0620c952\") " pod="openshift-console-operator/console-operator-6c7fb6b958-5xtm2" Mar 08 22:16:08.740787 master-0 kubenswrapper[29458]: I0308 22:16:08.740485 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlppc\" (UniqueName: \"kubernetes.io/projected/3a67ce4f-b5e5-4a5e-bac2-7bdfd3916196-kube-api-access-vlppc\") pod \"ingress-canary-npp24\" (UID: \"3a67ce4f-b5e5-4a5e-bac2-7bdfd3916196\") " pod="openshift-ingress-canary/ingress-canary-npp24" Mar 08 22:16:08.740787 master-0 kubenswrapper[29458]: I0308 22:16:08.740511 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/547efe6c-f422-42a9-8db9-7e4e0620c952-trusted-ca\") pod \"console-operator-6c7fb6b958-5xtm2\" (UID: \"547efe6c-f422-42a9-8db9-7e4e0620c952\") " pod="openshift-console-operator/console-operator-6c7fb6b958-5xtm2" Mar 08 22:16:08.740787 master-0 kubenswrapper[29458]: I0308 22:16:08.740620 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/547efe6c-f422-42a9-8db9-7e4e0620c952-serving-cert\") pod \"console-operator-6c7fb6b958-5xtm2\" (UID: \"547efe6c-f422-42a9-8db9-7e4e0620c952\") " pod="openshift-console-operator/console-operator-6c7fb6b958-5xtm2" Mar 08 22:16:08.741400 master-0 kubenswrapper[29458]: I0308 22:16:08.741361 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/547efe6c-f422-42a9-8db9-7e4e0620c952-config\") pod \"console-operator-6c7fb6b958-5xtm2\" (UID: \"547efe6c-f422-42a9-8db9-7e4e0620c952\") " pod="openshift-console-operator/console-operator-6c7fb6b958-5xtm2" Mar 08 22:16:08.742568 master-0 kubenswrapper[29458]: I0308 22:16:08.742530 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/547efe6c-f422-42a9-8db9-7e4e0620c952-trusted-ca\") pod \"console-operator-6c7fb6b958-5xtm2\" (UID: \"547efe6c-f422-42a9-8db9-7e4e0620c952\") " pod="openshift-console-operator/console-operator-6c7fb6b958-5xtm2" Mar 08 22:16:08.750142 master-0 kubenswrapper[29458]: I0308 22:16:08.745769 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/547efe6c-f422-42a9-8db9-7e4e0620c952-serving-cert\") pod \"console-operator-6c7fb6b958-5xtm2\" (UID: \"547efe6c-f422-42a9-8db9-7e4e0620c952\") " pod="openshift-console-operator/console-operator-6c7fb6b958-5xtm2" Mar 08 22:16:08.764114 master-0 kubenswrapper[29458]: I0308 22:16:08.764028 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xnfx\" (UniqueName: \"kubernetes.io/projected/547efe6c-f422-42a9-8db9-7e4e0620c952-kube-api-access-8xnfx\") pod \"console-operator-6c7fb6b958-5xtm2\" (UID: \"547efe6c-f422-42a9-8db9-7e4e0620c952\") " pod="openshift-console-operator/console-operator-6c7fb6b958-5xtm2" Mar 08 22:16:08.842733 master-0 kubenswrapper[29458]: I0308 22:16:08.842666 29458 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a67ce4f-b5e5-4a5e-bac2-7bdfd3916196-cert\") pod \"ingress-canary-npp24\" (UID: \"3a67ce4f-b5e5-4a5e-bac2-7bdfd3916196\") " pod="openshift-ingress-canary/ingress-canary-npp24" Mar 08 22:16:08.842733 master-0 kubenswrapper[29458]: I0308 22:16:08.842928 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlppc\" (UniqueName: \"kubernetes.io/projected/3a67ce4f-b5e5-4a5e-bac2-7bdfd3916196-kube-api-access-vlppc\") pod \"ingress-canary-npp24\" (UID: \"3a67ce4f-b5e5-4a5e-bac2-7bdfd3916196\") " pod="openshift-ingress-canary/ingress-canary-npp24" Mar 08 22:16:08.846284 master-0 kubenswrapper[29458]: I0308 22:16:08.846247 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a67ce4f-b5e5-4a5e-bac2-7bdfd3916196-cert\") pod \"ingress-canary-npp24\" (UID: \"3a67ce4f-b5e5-4a5e-bac2-7bdfd3916196\") " pod="openshift-ingress-canary/ingress-canary-npp24" Mar 08 22:16:08.861692 master-0 kubenswrapper[29458]: I0308 22:16:08.861624 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlppc\" (UniqueName: \"kubernetes.io/projected/3a67ce4f-b5e5-4a5e-bac2-7bdfd3916196-kube-api-access-vlppc\") pod \"ingress-canary-npp24\" (UID: \"3a67ce4f-b5e5-4a5e-bac2-7bdfd3916196\") " pod="openshift-ingress-canary/ingress-canary-npp24" Mar 08 22:16:08.881803 master-0 kubenswrapper[29458]: I0308 22:16:08.881743 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-6c7fb6b958-5xtm2"
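
Each kube-api-access-* volume in the MountVolume.SetUp entries here is the projected service-account token that the kubelet places into every container at a fixed path. A sketch of what a process inside one of these pods would find there (the paths are the standard in-cluster locations; run inside any pod):

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// Mount point of a kube-api-access-* projected volume: the token,
    	// the cluster CA bundle, and the pod's namespace.
    	base := "/var/run/secrets/kubernetes.io/serviceaccount"
    	for _, name := range []string{"token", "ca.crt", "namespace"} {
    		data, err := os.ReadFile(base + "/" + name)
    		if err != nil {
    			fmt.Printf("%s: %v\n", name, err)
    			continue
    		}
    		fmt.Printf("%s: %d bytes\n", name, len(data))
    	}
    }

Mar 08 22:16:09.002242 master-0 kubenswrapper[29458]: I0308 22:16:09.000863 29458 util.go:30] "No sandbox for pod can be found.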
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-npp24" Mar 08 22:16:09.385250 master-0 kubenswrapper[29458]: I0308 22:16:09.385115 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-6c7fb6b958-5xtm2"] Mar 08 22:16:09.499100 master-0 kubenswrapper[29458]: I0308 22:16:09.499033 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-npp24"] Mar 08 22:16:09.895700 master-0 kubenswrapper[29458]: I0308 22:16:09.895517 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-6c7fb6b958-5xtm2" event={"ID":"547efe6c-f422-42a9-8db9-7e4e0620c952","Type":"ContainerStarted","Data":"bc5f7a8cfc0a2fa1398e520a59079aa3af0a7ddd6ed8e3bbd6b54c67a30bb862"} Mar 08 22:16:09.899914 master-0 kubenswrapper[29458]: I0308 22:16:09.899857 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-npp24" event={"ID":"3a67ce4f-b5e5-4a5e-bac2-7bdfd3916196","Type":"ContainerStarted","Data":"8f97c346cd8e60c3d9ee2297c78ff6eedfa59d8d594c86f98a9a7cd30f205132"} Mar 08 22:16:09.899914 master-0 kubenswrapper[29458]: I0308 22:16:09.899918 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-npp24" event={"ID":"3a67ce4f-b5e5-4a5e-bac2-7bdfd3916196","Type":"ContainerStarted","Data":"e00176360ff2e5619f5eb248a3bd13c2434c77c32b313e5ed1e28832ec5d9352"} Mar 08 22:16:09.926113 master-0 kubenswrapper[29458]: I0308 22:16:09.923253 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-npp24" podStartSLOduration=1.923230022 podStartE2EDuration="1.923230022s" podCreationTimestamp="2026-03-08 22:16:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:16:09.921707129 +0000 UTC m=+139.209764711" watchObservedRunningTime="2026-03-08 22:16:09.923230022 +0000 UTC m=+139.211287614" Mar 08 22:16:12.933174 master-0 kubenswrapper[29458]: I0308 22:16:12.931902 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-6c7fb6b958-5xtm2" event={"ID":"547efe6c-f422-42a9-8db9-7e4e0620c952","Type":"ContainerStarted","Data":"79c987a6bd5e37436e06d07a5440a42dbde57a6ff36ef8a77a15ce184f0f5afa"} Mar 08 22:16:12.939277 master-0 kubenswrapper[29458]: I0308 22:16:12.933414 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-6c7fb6b958-5xtm2" Mar 08 22:16:12.951052 master-0 kubenswrapper[29458]: I0308 22:16:12.950215 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-6c7fb6b958-5xtm2" Mar 08 22:16:12.967477 master-0 kubenswrapper[29458]: I0308 22:16:12.967320 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-6c7fb6b958-5xtm2" podStartSLOduration=2.047430896 podStartE2EDuration="4.967284467s" podCreationTimestamp="2026-03-08 22:16:08 +0000 UTC" firstStartedPulling="2026-03-08 22:16:09.398419312 +0000 UTC m=+138.686476914" lastFinishedPulling="2026-03-08 22:16:12.318272873 +0000 UTC m=+141.606330485" observedRunningTime="2026-03-08 22:16:12.956939943 +0000 UTC m=+142.244997545" watchObservedRunningTime="2026-03-08 22:16:12.967284467 +0000 UTC m=+142.255342049"
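
The probe status flipping from "" to "ready" for console-operator above is the kubelet's prober polling the container's readiness endpoint and reporting the transition into the sync loop; note also that its podStartSLOduration (2.047s) excludes the image-pull window between firstStartedPulling and lastFinishedPulling, hence the smaller figure than the 4.967s end-to-end duration. A toy probe target exhibiting the same not-ready-then-ready flip (the /readyz path, port 8080, and 3-second warm-up are arbitrary stand-ins, not the operator's actual configuration):

    package main

    import (
    	"log"
    	"net/http"
    	"sync/atomic"
    	"time"
    )

    func main() {
    	var ready atomic.Bool
    	// Simulated warm-up: the endpoint returns 503 until initialization
    	// finishes, then 200 -- the flip a kubelet readiness probe observes.
    	go func() {
    		time.Sleep(3 * time.Second)
    		ready.Store(true)
    	}()
    	http.HandleFunc("/readyz", func(w http.ResponseWriter, r *http.Request) {
    		if ready.Load() {
    			w.WriteHeader(http.StatusOK)
    			return
    		}
    		w.WriteHeader(http.StatusServiceUnavailable)
    	})
    	log.Fatal(http.ListenAndServe(":8080", nil))
    }

Mar 08 22:16:13.115954 master-0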
kubenswrapper[29458]: I0308 22:16:13.115890 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-84f57b9877-9lgd2"] Mar 08 22:16:13.117394 master-0 kubenswrapper[29458]: I0308 22:16:13.117373 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-84f57b9877-9lgd2" Mar 08 22:16:13.121029 master-0 kubenswrapper[29458]: I0308 22:16:13.120972 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 08 22:16:13.121188 master-0 kubenswrapper[29458]: I0308 22:16:13.121144 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-pjrpg" Mar 08 22:16:13.132089 master-0 kubenswrapper[29458]: I0308 22:16:13.128818 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 08 22:16:13.138562 master-0 kubenswrapper[29458]: I0308 22:16:13.137630 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-84f57b9877-9lgd2"] Mar 08 22:16:13.235447 master-0 kubenswrapper[29458]: I0308 22:16:13.235251 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxfd6\" (UniqueName: \"kubernetes.io/projected/a43ce821-0c23-47fd-8c58-67152d39c232-kube-api-access-xxfd6\") pod \"downloads-84f57b9877-9lgd2\" (UID: \"a43ce821-0c23-47fd-8c58-67152d39c232\") " pod="openshift-console/downloads-84f57b9877-9lgd2" Mar 08 22:16:13.337453 master-0 kubenswrapper[29458]: I0308 22:16:13.337336 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxfd6\" (UniqueName: \"kubernetes.io/projected/a43ce821-0c23-47fd-8c58-67152d39c232-kube-api-access-xxfd6\") pod \"downloads-84f57b9877-9lgd2\" (UID: \"a43ce821-0c23-47fd-8c58-67152d39c232\") " pod="openshift-console/downloads-84f57b9877-9lgd2" Mar 08 22:16:13.354698 master-0 kubenswrapper[29458]: I0308 22:16:13.354631 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxfd6\" (UniqueName: \"kubernetes.io/projected/a43ce821-0c23-47fd-8c58-67152d39c232-kube-api-access-xxfd6\") pod \"downloads-84f57b9877-9lgd2\" (UID: \"a43ce821-0c23-47fd-8c58-67152d39c232\") " pod="openshift-console/downloads-84f57b9877-9lgd2" Mar 08 22:16:13.450582 master-0 kubenswrapper[29458]: I0308 22:16:13.450461 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-84f57b9877-9lgd2" Mar 08 22:16:14.035503 master-0 kubenswrapper[29458]: I0308 22:16:14.035409 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-84f57b9877-9lgd2"] Mar 08 22:16:14.043480 master-0 kubenswrapper[29458]: W0308 22:16:14.043421 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda43ce821_0c23_47fd_8c58_67152d39c232.slice/crio-709ed38fab8ea547db70e1a38c12c1f95057b0b0132f5fea54ba488f33c92a7c WatchSource:0}: Error finding container 709ed38fab8ea547db70e1a38c12c1f95057b0b0132f5fea54ba488f33c92a7c: Status 404 returned error can't find the container with id 709ed38fab8ea547db70e1a38c12c1f95057b0b0132f5fea54ba488f33c92a7c Mar 08 22:16:14.957439 master-0 kubenswrapper[29458]: I0308 22:16:14.957344 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-84f57b9877-9lgd2" event={"ID":"a43ce821-0c23-47fd-8c58-67152d39c232","Type":"ContainerStarted","Data":"709ed38fab8ea547db70e1a38c12c1f95057b0b0132f5fea54ba488f33c92a7c"} Mar 08 22:16:19.797327 master-0 kubenswrapper[29458]: I0308 22:16:19.795740 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6cb77ccb47-w9dnw"] Mar 08 22:16:19.797327 master-0 kubenswrapper[29458]: I0308 22:16:19.796968 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6cb77ccb47-w9dnw" Mar 08 22:16:19.801566 master-0 kubenswrapper[29458]: I0308 22:16:19.801532 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-hxnnm" Mar 08 22:16:19.801643 master-0 kubenswrapper[29458]: I0308 22:16:19.801562 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 08 22:16:19.801962 master-0 kubenswrapper[29458]: I0308 22:16:19.801932 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 08 22:16:19.802123 master-0 kubenswrapper[29458]: I0308 22:16:19.802097 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 08 22:16:19.802858 master-0 kubenswrapper[29458]: I0308 22:16:19.802820 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 08 22:16:19.803031 master-0 kubenswrapper[29458]: I0308 22:16:19.802999 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 08 22:16:19.820479 master-0 kubenswrapper[29458]: I0308 22:16:19.820401 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6cb77ccb47-w9dnw"] Mar 08 22:16:19.858408 master-0 kubenswrapper[29458]: I0308 22:16:19.858021 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a7c39000-e378-449c-b387-249518e9a1e9-console-oauth-config\") pod \"console-6cb77ccb47-w9dnw\" (UID: \"a7c39000-e378-449c-b387-249518e9a1e9\") " pod="openshift-console/console-6cb77ccb47-w9dnw" Mar 08 22:16:19.858408 master-0 kubenswrapper[29458]: I0308 22:16:19.858419 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sxd2\" (UniqueName: 
\"kubernetes.io/projected/a7c39000-e378-449c-b387-249518e9a1e9-kube-api-access-8sxd2\") pod \"console-6cb77ccb47-w9dnw\" (UID: \"a7c39000-e378-449c-b387-249518e9a1e9\") " pod="openshift-console/console-6cb77ccb47-w9dnw" Mar 08 22:16:19.858721 master-0 kubenswrapper[29458]: I0308 22:16:19.858454 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a7c39000-e378-449c-b387-249518e9a1e9-oauth-serving-cert\") pod \"console-6cb77ccb47-w9dnw\" (UID: \"a7c39000-e378-449c-b387-249518e9a1e9\") " pod="openshift-console/console-6cb77ccb47-w9dnw" Mar 08 22:16:19.858721 master-0 kubenswrapper[29458]: I0308 22:16:19.858510 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7c39000-e378-449c-b387-249518e9a1e9-console-serving-cert\") pod \"console-6cb77ccb47-w9dnw\" (UID: \"a7c39000-e378-449c-b387-249518e9a1e9\") " pod="openshift-console/console-6cb77ccb47-w9dnw" Mar 08 22:16:19.858721 master-0 kubenswrapper[29458]: I0308 22:16:19.858531 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a7c39000-e378-449c-b387-249518e9a1e9-console-config\") pod \"console-6cb77ccb47-w9dnw\" (UID: \"a7c39000-e378-449c-b387-249518e9a1e9\") " pod="openshift-console/console-6cb77ccb47-w9dnw" Mar 08 22:16:19.858721 master-0 kubenswrapper[29458]: I0308 22:16:19.858579 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7c39000-e378-449c-b387-249518e9a1e9-service-ca\") pod \"console-6cb77ccb47-w9dnw\" (UID: \"a7c39000-e378-449c-b387-249518e9a1e9\") " pod="openshift-console/console-6cb77ccb47-w9dnw" Mar 08 22:16:19.960353 master-0 kubenswrapper[29458]: I0308 22:16:19.960290 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a7c39000-e378-449c-b387-249518e9a1e9-console-oauth-config\") pod \"console-6cb77ccb47-w9dnw\" (UID: \"a7c39000-e378-449c-b387-249518e9a1e9\") " pod="openshift-console/console-6cb77ccb47-w9dnw" Mar 08 22:16:19.960353 master-0 kubenswrapper[29458]: I0308 22:16:19.960361 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sxd2\" (UniqueName: \"kubernetes.io/projected/a7c39000-e378-449c-b387-249518e9a1e9-kube-api-access-8sxd2\") pod \"console-6cb77ccb47-w9dnw\" (UID: \"a7c39000-e378-449c-b387-249518e9a1e9\") " pod="openshift-console/console-6cb77ccb47-w9dnw" Mar 08 22:16:19.960353 master-0 kubenswrapper[29458]: I0308 22:16:19.960397 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a7c39000-e378-449c-b387-249518e9a1e9-oauth-serving-cert\") pod \"console-6cb77ccb47-w9dnw\" (UID: \"a7c39000-e378-449c-b387-249518e9a1e9\") " pod="openshift-console/console-6cb77ccb47-w9dnw" Mar 08 22:16:19.960779 master-0 kubenswrapper[29458]: I0308 22:16:19.960454 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7c39000-e378-449c-b387-249518e9a1e9-console-serving-cert\") pod \"console-6cb77ccb47-w9dnw\" (UID: \"a7c39000-e378-449c-b387-249518e9a1e9\") " 
pod="openshift-console/console-6cb77ccb47-w9dnw" Mar 08 22:16:19.960779 master-0 kubenswrapper[29458]: I0308 22:16:19.960483 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a7c39000-e378-449c-b387-249518e9a1e9-console-config\") pod \"console-6cb77ccb47-w9dnw\" (UID: \"a7c39000-e378-449c-b387-249518e9a1e9\") " pod="openshift-console/console-6cb77ccb47-w9dnw" Mar 08 22:16:19.960779 master-0 kubenswrapper[29458]: I0308 22:16:19.960631 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7c39000-e378-449c-b387-249518e9a1e9-service-ca\") pod \"console-6cb77ccb47-w9dnw\" (UID: \"a7c39000-e378-449c-b387-249518e9a1e9\") " pod="openshift-console/console-6cb77ccb47-w9dnw" Mar 08 22:16:19.961559 master-0 kubenswrapper[29458]: I0308 22:16:19.961531 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7c39000-e378-449c-b387-249518e9a1e9-service-ca\") pod \"console-6cb77ccb47-w9dnw\" (UID: \"a7c39000-e378-449c-b387-249518e9a1e9\") " pod="openshift-console/console-6cb77ccb47-w9dnw" Mar 08 22:16:19.961653 master-0 kubenswrapper[29458]: E0308 22:16:19.961624 29458 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Mar 08 22:16:19.961695 master-0 kubenswrapper[29458]: E0308 22:16:19.961683 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7c39000-e378-449c-b387-249518e9a1e9-console-serving-cert podName:a7c39000-e378-449c-b387-249518e9a1e9 nodeName:}" failed. No retries permitted until 2026-03-08 22:16:20.461666559 +0000 UTC m=+149.749724151 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/a7c39000-e378-449c-b387-249518e9a1e9-console-serving-cert") pod "console-6cb77ccb47-w9dnw" (UID: "a7c39000-e378-449c-b387-249518e9a1e9") : secret "console-serving-cert" not found Mar 08 22:16:19.962536 master-0 kubenswrapper[29458]: I0308 22:16:19.962498 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a7c39000-e378-449c-b387-249518e9a1e9-oauth-serving-cert\") pod \"console-6cb77ccb47-w9dnw\" (UID: \"a7c39000-e378-449c-b387-249518e9a1e9\") " pod="openshift-console/console-6cb77ccb47-w9dnw" Mar 08 22:16:19.962685 master-0 kubenswrapper[29458]: I0308 22:16:19.962655 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a7c39000-e378-449c-b387-249518e9a1e9-console-config\") pod \"console-6cb77ccb47-w9dnw\" (UID: \"a7c39000-e378-449c-b387-249518e9a1e9\") " pod="openshift-console/console-6cb77ccb47-w9dnw" Mar 08 22:16:19.965375 master-0 kubenswrapper[29458]: I0308 22:16:19.965317 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a7c39000-e378-449c-b387-249518e9a1e9-console-oauth-config\") pod \"console-6cb77ccb47-w9dnw\" (UID: \"a7c39000-e378-449c-b387-249518e9a1e9\") " pod="openshift-console/console-6cb77ccb47-w9dnw" Mar 08 22:16:19.981779 master-0 kubenswrapper[29458]: I0308 22:16:19.981728 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sxd2\" (UniqueName: \"kubernetes.io/projected/a7c39000-e378-449c-b387-249518e9a1e9-kube-api-access-8sxd2\") pod \"console-6cb77ccb47-w9dnw\" (UID: \"a7c39000-e378-449c-b387-249518e9a1e9\") " pod="openshift-console/console-6cb77ccb47-w9dnw" Mar 08 22:16:20.469730 master-0 kubenswrapper[29458]: I0308 22:16:20.469612 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7c39000-e378-449c-b387-249518e9a1e9-console-serving-cert\") pod \"console-6cb77ccb47-w9dnw\" (UID: \"a7c39000-e378-449c-b387-249518e9a1e9\") " pod="openshift-console/console-6cb77ccb47-w9dnw" Mar 08 22:16:20.470262 master-0 kubenswrapper[29458]: E0308 22:16:20.470050 29458 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Mar 08 22:16:20.470262 master-0 kubenswrapper[29458]: E0308 22:16:20.470222 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7c39000-e378-449c-b387-249518e9a1e9-console-serving-cert podName:a7c39000-e378-449c-b387-249518e9a1e9 nodeName:}" failed. No retries permitted until 2026-03-08 22:16:21.470186406 +0000 UTC m=+150.758244028 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/a7c39000-e378-449c-b387-249518e9a1e9-console-serving-cert") pod "console-6cb77ccb47-w9dnw" (UID: "a7c39000-e378-449c-b387-249518e9a1e9") : secret "console-serving-cert" not found Mar 08 22:16:21.489021 master-0 kubenswrapper[29458]: I0308 22:16:21.488771 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7c39000-e378-449c-b387-249518e9a1e9-console-serving-cert\") pod \"console-6cb77ccb47-w9dnw\" (UID: \"a7c39000-e378-449c-b387-249518e9a1e9\") " pod="openshift-console/console-6cb77ccb47-w9dnw" Mar 08 22:16:21.489956 master-0 kubenswrapper[29458]: E0308 22:16:21.488932 29458 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Mar 08 22:16:21.489956 master-0 kubenswrapper[29458]: E0308 22:16:21.489142 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7c39000-e378-449c-b387-249518e9a1e9-console-serving-cert podName:a7c39000-e378-449c-b387-249518e9a1e9 nodeName:}" failed. No retries permitted until 2026-03-08 22:16:23.489119855 +0000 UTC m=+152.777177447 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/a7c39000-e378-449c-b387-249518e9a1e9-console-serving-cert") pod "console-6cb77ccb47-w9dnw" (UID: "a7c39000-e378-449c-b387-249518e9a1e9") : secret "console-serving-cert" not found Mar 08 22:16:22.597325 master-0 kubenswrapper[29458]: I0308 22:16:22.597124 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-798c96757f-zln5h"] Mar 08 22:16:22.600059 master-0 kubenswrapper[29458]: I0308 22:16:22.599985 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:16:22.606227 master-0 kubenswrapper[29458]: I0308 22:16:22.605263 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-798c96757f-zln5h"] Mar 08 22:16:22.611832 master-0 kubenswrapper[29458]: I0308 22:16:22.611767 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 08 22:16:22.713553 master-0 kubenswrapper[29458]: I0308 22:16:22.713472 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/477076a3-21e3-4a37-a442-54cd4d4ff12e-trusted-ca-bundle\") pod \"console-798c96757f-zln5h\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:16:22.718407 master-0 kubenswrapper[29458]: I0308 22:16:22.714201 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-config\") pod \"console-798c96757f-zln5h\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:16:22.718407 master-0 kubenswrapper[29458]: I0308 22:16:22.718061 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-oauth-config\") pod \"console-798c96757f-zln5h\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:16:22.718407 master-0 kubenswrapper[29458]: I0308 22:16:22.718177 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/477076a3-21e3-4a37-a442-54cd4d4ff12e-service-ca\") pod \"console-798c96757f-zln5h\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:16:22.718407 master-0 kubenswrapper[29458]: I0308 22:16:22.718281 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-serving-cert\") pod \"console-798c96757f-zln5h\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:16:22.718407 master-0 kubenswrapper[29458]: I0308 22:16:22.718348 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqlz5\" (UniqueName: \"kubernetes.io/projected/477076a3-21e3-4a37-a442-54cd4d4ff12e-kube-api-access-fqlz5\") pod \"console-798c96757f-zln5h\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:16:22.719378 master-0 kubenswrapper[29458]: I0308 22:16:22.718455 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/477076a3-21e3-4a37-a442-54cd4d4ff12e-oauth-serving-cert\") pod \"console-798c96757f-zln5h\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:16:22.820104 master-0 kubenswrapper[29458]: I0308 22:16:22.820000 29458 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-serving-cert\") pod \"console-798c96757f-zln5h\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:16:22.820352 master-0 kubenswrapper[29458]: I0308 22:16:22.820135 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqlz5\" (UniqueName: \"kubernetes.io/projected/477076a3-21e3-4a37-a442-54cd4d4ff12e-kube-api-access-fqlz5\") pod \"console-798c96757f-zln5h\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:16:22.820352 master-0 kubenswrapper[29458]: E0308 22:16:22.820313 29458 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Mar 08 22:16:22.820428 master-0 kubenswrapper[29458]: E0308 22:16:22.820400 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-serving-cert podName:477076a3-21e3-4a37-a442-54cd4d4ff12e nodeName:}" failed. No retries permitted until 2026-03-08 22:16:23.320380456 +0000 UTC m=+152.608438048 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-serving-cert") pod "console-798c96757f-zln5h" (UID: "477076a3-21e3-4a37-a442-54cd4d4ff12e") : secret "console-serving-cert" not found Mar 08 22:16:22.823917 master-0 kubenswrapper[29458]: I0308 22:16:22.820519 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/477076a3-21e3-4a37-a442-54cd4d4ff12e-oauth-serving-cert\") pod \"console-798c96757f-zln5h\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:16:22.823917 master-0 kubenswrapper[29458]: I0308 22:16:22.820651 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/477076a3-21e3-4a37-a442-54cd4d4ff12e-trusted-ca-bundle\") pod \"console-798c96757f-zln5h\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:16:22.823917 master-0 kubenswrapper[29458]: I0308 22:16:22.820717 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-config\") pod \"console-798c96757f-zln5h\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:16:22.823917 master-0 kubenswrapper[29458]: I0308 22:16:22.820743 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-oauth-config\") pod \"console-798c96757f-zln5h\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:16:22.823917 master-0 kubenswrapper[29458]: I0308 22:16:22.820768 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/477076a3-21e3-4a37-a442-54cd4d4ff12e-service-ca\") pod \"console-798c96757f-zln5h\" (UID: 
\"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:16:22.823917 master-0 kubenswrapper[29458]: I0308 22:16:22.821503 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/477076a3-21e3-4a37-a442-54cd4d4ff12e-service-ca\") pod \"console-798c96757f-zln5h\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:16:22.823917 master-0 kubenswrapper[29458]: I0308 22:16:22.821789 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/477076a3-21e3-4a37-a442-54cd4d4ff12e-oauth-serving-cert\") pod \"console-798c96757f-zln5h\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:16:22.823917 master-0 kubenswrapper[29458]: I0308 22:16:22.822270 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/477076a3-21e3-4a37-a442-54cd4d4ff12e-trusted-ca-bundle\") pod \"console-798c96757f-zln5h\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:16:22.823917 master-0 kubenswrapper[29458]: I0308 22:16:22.822374 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-config\") pod \"console-798c96757f-zln5h\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:16:22.825515 master-0 kubenswrapper[29458]: I0308 22:16:22.823947 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-oauth-config\") pod \"console-798c96757f-zln5h\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:16:22.837891 master-0 kubenswrapper[29458]: I0308 22:16:22.837846 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqlz5\" (UniqueName: \"kubernetes.io/projected/477076a3-21e3-4a37-a442-54cd4d4ff12e-kube-api-access-fqlz5\") pod \"console-798c96757f-zln5h\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:16:23.336631 master-0 kubenswrapper[29458]: I0308 22:16:23.336536 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-serving-cert\") pod \"console-798c96757f-zln5h\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:16:23.338037 master-0 kubenswrapper[29458]: E0308 22:16:23.337941 29458 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Mar 08 22:16:23.338284 master-0 kubenswrapper[29458]: E0308 22:16:23.338154 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-serving-cert podName:477076a3-21e3-4a37-a442-54cd4d4ff12e nodeName:}" failed. No retries permitted until 2026-03-08 22:16:24.338120625 +0000 UTC m=+153.626178237 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-serving-cert") pod "console-798c96757f-zln5h" (UID: "477076a3-21e3-4a37-a442-54cd4d4ff12e") : secret "console-serving-cert" not found Mar 08 22:16:23.540730 master-0 kubenswrapper[29458]: I0308 22:16:23.540605 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7c39000-e378-449c-b387-249518e9a1e9-console-serving-cert\") pod \"console-6cb77ccb47-w9dnw\" (UID: \"a7c39000-e378-449c-b387-249518e9a1e9\") " pod="openshift-console/console-6cb77ccb47-w9dnw" Mar 08 22:16:23.541063 master-0 kubenswrapper[29458]: E0308 22:16:23.541035 29458 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Mar 08 22:16:23.541169 master-0 kubenswrapper[29458]: E0308 22:16:23.541155 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7c39000-e378-449c-b387-249518e9a1e9-console-serving-cert podName:a7c39000-e378-449c-b387-249518e9a1e9 nodeName:}" failed. No retries permitted until 2026-03-08 22:16:27.541128425 +0000 UTC m=+156.829186047 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/a7c39000-e378-449c-b387-249518e9a1e9-console-serving-cert") pod "console-6cb77ccb47-w9dnw" (UID: "a7c39000-e378-449c-b387-249518e9a1e9") : secret "console-serving-cert" not found Mar 08 22:16:24.355543 master-0 kubenswrapper[29458]: I0308 22:16:24.355369 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-serving-cert\") pod \"console-798c96757f-zln5h\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:16:24.356336 master-0 kubenswrapper[29458]: E0308 22:16:24.355572 29458 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Mar 08 22:16:24.356336 master-0 kubenswrapper[29458]: E0308 22:16:24.355688 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-serving-cert podName:477076a3-21e3-4a37-a442-54cd4d4ff12e nodeName:}" failed. No retries permitted until 2026-03-08 22:16:26.355662124 +0000 UTC m=+155.643719716 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-serving-cert") pod "console-798c96757f-zln5h" (UID: "477076a3-21e3-4a37-a442-54cd4d4ff12e") : secret "console-serving-cert" not found Mar 08 22:16:26.397627 master-0 kubenswrapper[29458]: I0308 22:16:26.396617 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-serving-cert\") pod \"console-798c96757f-zln5h\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:16:26.397627 master-0 kubenswrapper[29458]: E0308 22:16:26.397006 29458 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Mar 08 22:16:26.397627 master-0 kubenswrapper[29458]: E0308 22:16:26.397105 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-serving-cert podName:477076a3-21e3-4a37-a442-54cd4d4ff12e nodeName:}" failed. No retries permitted until 2026-03-08 22:16:30.397057312 +0000 UTC m=+159.685114934 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-serving-cert") pod "console-798c96757f-zln5h" (UID: "477076a3-21e3-4a37-a442-54cd4d4ff12e") : secret "console-serving-cert" not found Mar 08 22:16:27.617768 master-0 kubenswrapper[29458]: I0308 22:16:27.617684 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7c39000-e378-449c-b387-249518e9a1e9-console-serving-cert\") pod \"console-6cb77ccb47-w9dnw\" (UID: \"a7c39000-e378-449c-b387-249518e9a1e9\") " pod="openshift-console/console-6cb77ccb47-w9dnw" Mar 08 22:16:27.618690 master-0 kubenswrapper[29458]: E0308 22:16:27.617859 29458 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Mar 08 22:16:27.618690 master-0 kubenswrapper[29458]: E0308 22:16:27.617917 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7c39000-e378-449c-b387-249518e9a1e9-console-serving-cert podName:a7c39000-e378-449c-b387-249518e9a1e9 nodeName:}" failed. No retries permitted until 2026-03-08 22:16:35.617902099 +0000 UTC m=+164.905959691 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/a7c39000-e378-449c-b387-249518e9a1e9-console-serving-cert") pod "console-6cb77ccb47-w9dnw" (UID: "a7c39000-e378-449c-b387-249518e9a1e9") : secret "console-serving-cert" not found Mar 08 22:16:30.473970 master-0 kubenswrapper[29458]: I0308 22:16:30.473845 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-serving-cert\") pod \"console-798c96757f-zln5h\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:16:30.474758 master-0 kubenswrapper[29458]: E0308 22:16:30.474190 29458 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Mar 08 22:16:30.474758 master-0 kubenswrapper[29458]: E0308 22:16:30.474359 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-serving-cert podName:477076a3-21e3-4a37-a442-54cd4d4ff12e nodeName:}" failed. No retries permitted until 2026-03-08 22:16:38.47431558 +0000 UTC m=+167.762373182 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-serving-cert") pod "console-798c96757f-zln5h" (UID: "477076a3-21e3-4a37-a442-54cd4d4ff12e") : secret "console-serving-cert" not found Mar 08 22:16:35.680245 master-0 kubenswrapper[29458]: I0308 22:16:35.680104 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7c39000-e378-449c-b387-249518e9a1e9-console-serving-cert\") pod \"console-6cb77ccb47-w9dnw\" (UID: \"a7c39000-e378-449c-b387-249518e9a1e9\") " pod="openshift-console/console-6cb77ccb47-w9dnw" Mar 08 22:16:35.680923 master-0 kubenswrapper[29458]: E0308 22:16:35.680401 29458 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Mar 08 22:16:35.680923 master-0 kubenswrapper[29458]: E0308 22:16:35.680522 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7c39000-e378-449c-b387-249518e9a1e9-console-serving-cert podName:a7c39000-e378-449c-b387-249518e9a1e9 nodeName:}" failed. No retries permitted until 2026-03-08 22:16:51.680501339 +0000 UTC m=+180.968558931 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/a7c39000-e378-449c-b387-249518e9a1e9-console-serving-cert") pod "console-6cb77ccb47-w9dnw" (UID: "a7c39000-e378-449c-b387-249518e9a1e9") : secret "console-serving-cert" not found Mar 08 22:16:38.546472 master-0 kubenswrapper[29458]: I0308 22:16:38.546397 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-serving-cert\") pod \"console-798c96757f-zln5h\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:16:38.547022 master-0 kubenswrapper[29458]: E0308 22:16:38.546622 29458 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Mar 08 22:16:38.547022 master-0 kubenswrapper[29458]: E0308 22:16:38.546682 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-serving-cert podName:477076a3-21e3-4a37-a442-54cd4d4ff12e nodeName:}" failed. No retries permitted until 2026-03-08 22:16:54.546664117 +0000 UTC m=+183.834721709 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-serving-cert") pod "console-798c96757f-zln5h" (UID: "477076a3-21e3-4a37-a442-54cd4d4ff12e") : secret "console-serving-cert" not found Mar 08 22:16:50.298736 master-0 kubenswrapper[29458]: I0308 22:16:50.298584 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-84f57b9877-9lgd2" event={"ID":"a43ce821-0c23-47fd-8c58-67152d39c232","Type":"ContainerStarted","Data":"4da3dbaa3d7951f9ba6f368dcce15f6802f7d5c2460ead276168d91a6b385d0d"} Mar 08 22:16:50.299829 master-0 kubenswrapper[29458]: I0308 22:16:50.299801 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-84f57b9877-9lgd2" Mar 08 22:16:50.300747 master-0 kubenswrapper[29458]: I0308 22:16:50.300716 29458 patch_prober.go:28] interesting pod/downloads-84f57b9877-9lgd2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.93:8080/\": dial tcp 10.128.0.93:8080: connect: connection refused" start-of-body= Mar 08 22:16:50.300822 master-0 kubenswrapper[29458]: I0308 22:16:50.300755 29458 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-9lgd2" podUID="a43ce821-0c23-47fd-8c58-67152d39c232" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.93:8080/\": dial tcp 10.128.0.93:8080: connect: connection refused" Mar 08 22:16:50.325090 master-0 kubenswrapper[29458]: I0308 22:16:50.324981 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-84f57b9877-9lgd2" podStartSLOduration=1.381981564 podStartE2EDuration="37.324960627s" podCreationTimestamp="2026-03-08 22:16:13 +0000 UTC" firstStartedPulling="2026-03-08 22:16:14.047413142 +0000 UTC m=+143.335470744" lastFinishedPulling="2026-03-08 22:16:49.990392175 +0000 UTC m=+179.278449807" observedRunningTime="2026-03-08 22:16:50.319689328 +0000 UTC m=+179.607746920" watchObservedRunningTime="2026-03-08 22:16:50.324960627 +0000 UTC m=+179.613018219" Mar 08 22:16:51.306880 master-0 kubenswrapper[29458]: I0308 22:16:51.306812 29458 
patch_prober.go:28] interesting pod/downloads-84f57b9877-9lgd2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.93:8080/\": dial tcp 10.128.0.93:8080: connect: connection refused" start-of-body= Mar 08 22:16:51.307670 master-0 kubenswrapper[29458]: I0308 22:16:51.306899 29458 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-9lgd2" podUID="a43ce821-0c23-47fd-8c58-67152d39c232" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.93:8080/\": dial tcp 10.128.0.93:8080: connect: connection refused" Mar 08 22:16:51.684223 master-0 kubenswrapper[29458]: I0308 22:16:51.684119 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7c39000-e378-449c-b387-249518e9a1e9-console-serving-cert\") pod \"console-6cb77ccb47-w9dnw\" (UID: \"a7c39000-e378-449c-b387-249518e9a1e9\") " pod="openshift-console/console-6cb77ccb47-w9dnw" Mar 08 22:16:51.684670 master-0 kubenswrapper[29458]: E0308 22:16:51.684457 29458 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Mar 08 22:16:51.684670 master-0 kubenswrapper[29458]: E0308 22:16:51.684586 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7c39000-e378-449c-b387-249518e9a1e9-console-serving-cert podName:a7c39000-e378-449c-b387-249518e9a1e9 nodeName:}" failed. No retries permitted until 2026-03-08 22:17:23.684542791 +0000 UTC m=+212.972600393 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/a7c39000-e378-449c-b387-249518e9a1e9-console-serving-cert") pod "console-6cb77ccb47-w9dnw" (UID: "a7c39000-e378-449c-b387-249518e9a1e9") : secret "console-serving-cert" not found Mar 08 22:16:52.315994 master-0 kubenswrapper[29458]: I0308 22:16:52.315904 29458 patch_prober.go:28] interesting pod/downloads-84f57b9877-9lgd2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.93:8080/\": dial tcp 10.128.0.93:8080: connect: connection refused" start-of-body= Mar 08 22:16:52.317190 master-0 kubenswrapper[29458]: I0308 22:16:52.316016 29458 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-9lgd2" podUID="a43ce821-0c23-47fd-8c58-67152d39c232" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.93:8080/\": dial tcp 10.128.0.93:8080: connect: connection refused" Mar 08 22:16:53.451891 master-0 kubenswrapper[29458]: I0308 22:16:53.451775 29458 patch_prober.go:28] interesting pod/downloads-84f57b9877-9lgd2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.128.0.93:8080/\": dial tcp 10.128.0.93:8080: connect: connection refused" start-of-body= Mar 08 22:16:53.451891 master-0 kubenswrapper[29458]: I0308 22:16:53.451862 29458 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-84f57b9877-9lgd2" podUID="a43ce821-0c23-47fd-8c58-67152d39c232" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.93:8080/\": dial tcp 10.128.0.93:8080: connect: connection refused" Mar 08 22:16:53.453205 master-0 kubenswrapper[29458]: I0308 22:16:53.451897 29458 patch_prober.go:28] interesting pod/downloads-84f57b9877-9lgd2 container/download-server 
namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.93:8080/\": dial tcp 10.128.0.93:8080: connect: connection refused" start-of-body= Mar 08 22:16:53.453205 master-0 kubenswrapper[29458]: I0308 22:16:53.451965 29458 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-9lgd2" podUID="a43ce821-0c23-47fd-8c58-67152d39c232" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.93:8080/\": dial tcp 10.128.0.93:8080: connect: connection refused" Mar 08 22:16:54.552134 master-0 kubenswrapper[29458]: I0308 22:16:54.552040 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-serving-cert\") pod \"console-798c96757f-zln5h\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:16:54.553183 master-0 kubenswrapper[29458]: E0308 22:16:54.552366 29458 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Mar 08 22:16:54.553342 master-0 kubenswrapper[29458]: E0308 22:16:54.553241 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-serving-cert podName:477076a3-21e3-4a37-a442-54cd4d4ff12e nodeName:}" failed. No retries permitted until 2026-03-08 22:17:26.55321464 +0000 UTC m=+215.841272252 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-serving-cert") pod "console-798c96757f-zln5h" (UID: "477076a3-21e3-4a37-a442-54cd4d4ff12e") : secret "console-serving-cert" not found Mar 08 22:17:03.461327 master-0 kubenswrapper[29458]: I0308 22:17:03.461227 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-84f57b9877-9lgd2" Mar 08 22:17:19.186402 master-0 kubenswrapper[29458]: I0308 22:17:19.186300 29458 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6cb77ccb47-w9dnw"] Mar 08 22:17:19.187439 master-0 kubenswrapper[29458]: E0308 22:17:19.186956 29458 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[console-serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-console/console-6cb77ccb47-w9dnw" podUID="a7c39000-e378-449c-b387-249518e9a1e9" Mar 08 22:17:19.225923 master-0 kubenswrapper[29458]: I0308 22:17:19.225833 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-695dfc9f84-n5pqv"] Mar 08 22:17:19.228326 master-0 kubenswrapper[29458]: I0308 22:17:19.228240 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-695dfc9f84-n5pqv" Mar 08 22:17:19.254016 master-0 kubenswrapper[29458]: I0308 22:17:19.253945 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-695dfc9f84-n5pqv"] Mar 08 22:17:19.322510 master-0 kubenswrapper[29458]: I0308 22:17:19.322303 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-console-serving-cert\") pod \"console-695dfc9f84-n5pqv\" (UID: \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\") " pod="openshift-console/console-695dfc9f84-n5pqv" Mar 08 22:17:19.322510 master-0 kubenswrapper[29458]: I0308 22:17:19.322414 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjd9f\" (UniqueName: \"kubernetes.io/projected/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-kube-api-access-qjd9f\") pod \"console-695dfc9f84-n5pqv\" (UID: \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\") " pod="openshift-console/console-695dfc9f84-n5pqv" Mar 08 22:17:19.322510 master-0 kubenswrapper[29458]: I0308 22:17:19.322460 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-oauth-serving-cert\") pod \"console-695dfc9f84-n5pqv\" (UID: \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\") " pod="openshift-console/console-695dfc9f84-n5pqv" Mar 08 22:17:19.322963 master-0 kubenswrapper[29458]: I0308 22:17:19.322676 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-console-config\") pod \"console-695dfc9f84-n5pqv\" (UID: \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\") " pod="openshift-console/console-695dfc9f84-n5pqv" Mar 08 22:17:19.322963 master-0 kubenswrapper[29458]: I0308 22:17:19.322743 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-console-oauth-config\") pod \"console-695dfc9f84-n5pqv\" (UID: \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\") " pod="openshift-console/console-695dfc9f84-n5pqv" Mar 08 22:17:19.323767 master-0 kubenswrapper[29458]: I0308 22:17:19.323047 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-service-ca\") pod \"console-695dfc9f84-n5pqv\" (UID: \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\") " pod="openshift-console/console-695dfc9f84-n5pqv" Mar 08 22:17:19.323767 master-0 kubenswrapper[29458]: I0308 22:17:19.323107 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-trusted-ca-bundle\") pod \"console-695dfc9f84-n5pqv\" (UID: \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\") " pod="openshift-console/console-695dfc9f84-n5pqv" Mar 08 22:17:19.425249 master-0 kubenswrapper[29458]: I0308 22:17:19.425168 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-service-ca\") pod \"console-695dfc9f84-n5pqv\" (UID: 
\"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\") " pod="openshift-console/console-695dfc9f84-n5pqv" Mar 08 22:17:19.425249 master-0 kubenswrapper[29458]: I0308 22:17:19.425221 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-trusted-ca-bundle\") pod \"console-695dfc9f84-n5pqv\" (UID: \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\") " pod="openshift-console/console-695dfc9f84-n5pqv" Mar 08 22:17:19.426176 master-0 kubenswrapper[29458]: I0308 22:17:19.426141 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-service-ca\") pod \"console-695dfc9f84-n5pqv\" (UID: \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\") " pod="openshift-console/console-695dfc9f84-n5pqv" Mar 08 22:17:19.426498 master-0 kubenswrapper[29458]: I0308 22:17:19.425247 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-console-serving-cert\") pod \"console-695dfc9f84-n5pqv\" (UID: \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\") " pod="openshift-console/console-695dfc9f84-n5pqv" Mar 08 22:17:19.426646 master-0 kubenswrapper[29458]: I0308 22:17:19.426608 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-trusted-ca-bundle\") pod \"console-695dfc9f84-n5pqv\" (UID: \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\") " pod="openshift-console/console-695dfc9f84-n5pqv" Mar 08 22:17:19.426700 master-0 kubenswrapper[29458]: I0308 22:17:19.426666 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjd9f\" (UniqueName: \"kubernetes.io/projected/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-kube-api-access-qjd9f\") pod \"console-695dfc9f84-n5pqv\" (UID: \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\") " pod="openshift-console/console-695dfc9f84-n5pqv" Mar 08 22:17:19.426833 master-0 kubenswrapper[29458]: I0308 22:17:19.426706 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-oauth-serving-cert\") pod \"console-695dfc9f84-n5pqv\" (UID: \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\") " pod="openshift-console/console-695dfc9f84-n5pqv" Mar 08 22:17:19.426833 master-0 kubenswrapper[29458]: I0308 22:17:19.426826 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-console-config\") pod \"console-695dfc9f84-n5pqv\" (UID: \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\") " pod="openshift-console/console-695dfc9f84-n5pqv" Mar 08 22:17:19.426993 master-0 kubenswrapper[29458]: I0308 22:17:19.426947 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-console-oauth-config\") pod \"console-695dfc9f84-n5pqv\" (UID: \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\") " pod="openshift-console/console-695dfc9f84-n5pqv" Mar 08 22:17:19.427502 master-0 kubenswrapper[29458]: I0308 22:17:19.427454 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-oauth-serving-cert\") pod \"console-695dfc9f84-n5pqv\" (UID: \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\") " pod="openshift-console/console-695dfc9f84-n5pqv" Mar 08 22:17:19.427655 master-0 kubenswrapper[29458]: I0308 22:17:19.427598 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-console-config\") pod \"console-695dfc9f84-n5pqv\" (UID: \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\") " pod="openshift-console/console-695dfc9f84-n5pqv" Mar 08 22:17:19.430545 master-0 kubenswrapper[29458]: I0308 22:17:19.430507 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-console-serving-cert\") pod \"console-695dfc9f84-n5pqv\" (UID: \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\") " pod="openshift-console/console-695dfc9f84-n5pqv" Mar 08 22:17:19.432400 master-0 kubenswrapper[29458]: I0308 22:17:19.432359 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-console-oauth-config\") pod \"console-695dfc9f84-n5pqv\" (UID: \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\") " pod="openshift-console/console-695dfc9f84-n5pqv" Mar 08 22:17:19.447094 master-0 kubenswrapper[29458]: I0308 22:17:19.446979 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjd9f\" (UniqueName: \"kubernetes.io/projected/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-kube-api-access-qjd9f\") pod \"console-695dfc9f84-n5pqv\" (UID: \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\") " pod="openshift-console/console-695dfc9f84-n5pqv" Mar 08 22:17:19.574708 master-0 kubenswrapper[29458]: I0308 22:17:19.574617 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6cb77ccb47-w9dnw" Mar 08 22:17:19.586279 master-0 kubenswrapper[29458]: I0308 22:17:19.586220 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6cb77ccb47-w9dnw" Mar 08 22:17:19.589452 master-0 kubenswrapper[29458]: I0308 22:17:19.589416 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-hxnnm" Mar 08 22:17:19.597626 master-0 kubenswrapper[29458]: I0308 22:17:19.597600 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-695dfc9f84-n5pqv" Mar 08 22:17:19.630511 master-0 kubenswrapper[29458]: I0308 22:17:19.630443 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7c39000-e378-449c-b387-249518e9a1e9-service-ca\") pod \"a7c39000-e378-449c-b387-249518e9a1e9\" (UID: \"a7c39000-e378-449c-b387-249518e9a1e9\") " Mar 08 22:17:19.630767 master-0 kubenswrapper[29458]: I0308 22:17:19.630551 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a7c39000-e378-449c-b387-249518e9a1e9-console-oauth-config\") pod \"a7c39000-e378-449c-b387-249518e9a1e9\" (UID: \"a7c39000-e378-449c-b387-249518e9a1e9\") " Mar 08 22:17:19.630767 master-0 kubenswrapper[29458]: I0308 22:17:19.630620 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a7c39000-e378-449c-b387-249518e9a1e9-oauth-serving-cert\") pod \"a7c39000-e378-449c-b387-249518e9a1e9\" (UID: \"a7c39000-e378-449c-b387-249518e9a1e9\") " Mar 08 22:17:19.630767 master-0 kubenswrapper[29458]: I0308 22:17:19.630654 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sxd2\" (UniqueName: \"kubernetes.io/projected/a7c39000-e378-449c-b387-249518e9a1e9-kube-api-access-8sxd2\") pod \"a7c39000-e378-449c-b387-249518e9a1e9\" (UID: \"a7c39000-e378-449c-b387-249518e9a1e9\") " Mar 08 22:17:19.630767 master-0 kubenswrapper[29458]: I0308 22:17:19.630679 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a7c39000-e378-449c-b387-249518e9a1e9-console-config\") pod \"a7c39000-e378-449c-b387-249518e9a1e9\" (UID: \"a7c39000-e378-449c-b387-249518e9a1e9\") " Mar 08 22:17:19.631474 master-0 kubenswrapper[29458]: I0308 22:17:19.631441 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7c39000-e378-449c-b387-249518e9a1e9-console-config" (OuterVolumeSpecName: "console-config") pod "a7c39000-e378-449c-b387-249518e9a1e9" (UID: "a7c39000-e378-449c-b387-249518e9a1e9"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:17:19.632049 master-0 kubenswrapper[29458]: I0308 22:17:19.631966 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7c39000-e378-449c-b387-249518e9a1e9-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a7c39000-e378-449c-b387-249518e9a1e9" (UID: "a7c39000-e378-449c-b387-249518e9a1e9"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:17:19.633555 master-0 kubenswrapper[29458]: I0308 22:17:19.633463 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7c39000-e378-449c-b387-249518e9a1e9-service-ca" (OuterVolumeSpecName: "service-ca") pod "a7c39000-e378-449c-b387-249518e9a1e9" (UID: "a7c39000-e378-449c-b387-249518e9a1e9"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:17:19.636842 master-0 kubenswrapper[29458]: I0308 22:17:19.636745 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7c39000-e378-449c-b387-249518e9a1e9-kube-api-access-8sxd2" (OuterVolumeSpecName: "kube-api-access-8sxd2") pod "a7c39000-e378-449c-b387-249518e9a1e9" (UID: "a7c39000-e378-449c-b387-249518e9a1e9"). InnerVolumeSpecName "kube-api-access-8sxd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 22:17:19.638773 master-0 kubenswrapper[29458]: I0308 22:17:19.638724 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c39000-e378-449c-b387-249518e9a1e9-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a7c39000-e378-449c-b387-249518e9a1e9" (UID: "a7c39000-e378-449c-b387-249518e9a1e9"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 22:17:19.739685 master-0 kubenswrapper[29458]: I0308 22:17:19.739629 29458 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7c39000-e378-449c-b387-249518e9a1e9-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 08 22:17:19.739685 master-0 kubenswrapper[29458]: I0308 22:17:19.739676 29458 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a7c39000-e378-449c-b387-249518e9a1e9-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 08 22:17:19.739685 master-0 kubenswrapper[29458]: I0308 22:17:19.739689 29458 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a7c39000-e378-449c-b387-249518e9a1e9-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 08 22:17:19.740089 master-0 kubenswrapper[29458]: I0308 22:17:19.739699 29458 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sxd2\" (UniqueName: \"kubernetes.io/projected/a7c39000-e378-449c-b387-249518e9a1e9-kube-api-access-8sxd2\") on node \"master-0\" DevicePath \"\"" Mar 08 22:17:19.740089 master-0 kubenswrapper[29458]: I0308 22:17:19.739709 29458 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a7c39000-e378-449c-b387-249518e9a1e9-console-config\") on node \"master-0\" DevicePath \"\"" Mar 08 22:17:20.041526 master-0 kubenswrapper[29458]: I0308 22:17:20.041460 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-695dfc9f84-n5pqv"] Mar 08 22:17:20.592411 master-0 kubenswrapper[29458]: I0308 22:17:20.588190 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6cb77ccb47-w9dnw" Mar 08 22:17:20.592411 master-0 kubenswrapper[29458]: I0308 22:17:20.588236 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-695dfc9f84-n5pqv" event={"ID":"85d1ad38-c1b6-4fc4-a852-703ba6171ca3","Type":"ContainerStarted","Data":"d351772f0daf236d6c0bef90f3b6ff8dcc2b25792df1488b1f028ed4d53e79b7"} Mar 08 22:17:20.671189 master-0 kubenswrapper[29458]: I0308 22:17:20.671102 29458 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6cb77ccb47-w9dnw"] Mar 08 22:17:20.680163 master-0 kubenswrapper[29458]: I0308 22:17:20.679037 29458 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6cb77ccb47-w9dnw"] Mar 08 22:17:20.758734 master-0 kubenswrapper[29458]: I0308 22:17:20.758650 29458 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7c39000-e378-449c-b387-249518e9a1e9-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 08 22:17:20.991395 master-0 kubenswrapper[29458]: I0308 22:17:20.991224 29458 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7c39000-e378-449c-b387-249518e9a1e9" path="/var/lib/kubelet/pods/a7c39000-e378-449c-b387-249518e9a1e9/volumes" Mar 08 22:17:23.666687 master-0 kubenswrapper[29458]: I0308 22:17:23.666566 29458 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-c84587d9b-7j6cs"] Mar 08 22:17:25.636841 master-0 kubenswrapper[29458]: I0308 22:17:25.636784 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-695dfc9f84-n5pqv" event={"ID":"85d1ad38-c1b6-4fc4-a852-703ba6171ca3","Type":"ContainerStarted","Data":"4bd4aa9baa387b392b4df50b79bd9b96db8ebc9695e43440320eac9e89b07164"} Mar 08 22:17:25.947616 master-0 kubenswrapper[29458]: I0308 22:17:25.947351 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-695dfc9f84-n5pqv" podStartSLOduration=2.130756993 podStartE2EDuration="6.947317934s" podCreationTimestamp="2026-03-08 22:17:19 +0000 UTC" firstStartedPulling="2026-03-08 22:17:20.049526121 +0000 UTC m=+209.337583723" lastFinishedPulling="2026-03-08 22:17:24.866087072 +0000 UTC m=+214.154144664" observedRunningTime="2026-03-08 22:17:25.943115049 +0000 UTC m=+215.231172671" watchObservedRunningTime="2026-03-08 22:17:25.947317934 +0000 UTC m=+215.235375556" Mar 08 22:17:26.588925 master-0 kubenswrapper[29458]: I0308 22:17:26.588827 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-serving-cert\") pod \"console-798c96757f-zln5h\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:17:26.595024 master-0 kubenswrapper[29458]: I0308 22:17:26.594963 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-serving-cert\") pod \"console-798c96757f-zln5h\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:17:26.825728 master-0 kubenswrapper[29458]: I0308 22:17:26.825642 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:17:27.328145 master-0 kubenswrapper[29458]: I0308 22:17:27.328005 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-798c96757f-zln5h"] Mar 08 22:17:27.332404 master-0 kubenswrapper[29458]: W0308 22:17:27.332324 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod477076a3_21e3_4a37_a442_54cd4d4ff12e.slice/crio-15bedef1821aaf15c54b7aa14c5678510db82a1a5b352b252d708d77f4fc038e WatchSource:0}: Error finding container 15bedef1821aaf15c54b7aa14c5678510db82a1a5b352b252d708d77f4fc038e: Status 404 returned error can't find the container with id 15bedef1821aaf15c54b7aa14c5678510db82a1a5b352b252d708d77f4fc038e Mar 08 22:17:27.668412 master-0 kubenswrapper[29458]: I0308 22:17:27.668357 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-798c96757f-zln5h_477076a3-21e3-4a37-a442-54cd4d4ff12e/console/0.log" Mar 08 22:17:27.668715 master-0 kubenswrapper[29458]: I0308 22:17:27.668439 29458 generic.go:334] "Generic (PLEG): container finished" podID="477076a3-21e3-4a37-a442-54cd4d4ff12e" containerID="056b51563aade541c2739eb8b58841beea5973bd28673ebd863e34fd95f21485" exitCode=255 Mar 08 22:17:27.668715 master-0 kubenswrapper[29458]: I0308 22:17:27.668501 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-798c96757f-zln5h" event={"ID":"477076a3-21e3-4a37-a442-54cd4d4ff12e","Type":"ContainerDied","Data":"056b51563aade541c2739eb8b58841beea5973bd28673ebd863e34fd95f21485"} Mar 08 22:17:27.668715 master-0 kubenswrapper[29458]: I0308 22:17:27.668573 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-798c96757f-zln5h" event={"ID":"477076a3-21e3-4a37-a442-54cd4d4ff12e","Type":"ContainerStarted","Data":"15bedef1821aaf15c54b7aa14c5678510db82a1a5b352b252d708d77f4fc038e"} Mar 08 22:17:27.669656 master-0 kubenswrapper[29458]: I0308 22:17:27.669594 29458 scope.go:117] "RemoveContainer" containerID="056b51563aade541c2739eb8b58841beea5973bd28673ebd863e34fd95f21485" Mar 08 22:17:27.935256 master-0 kubenswrapper[29458]: I0308 22:17:27.935036 29458 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-798c96757f-zln5h"] Mar 08 22:17:27.978232 master-0 kubenswrapper[29458]: I0308 22:17:27.978157 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-cbcdbfdc5-b5crv"] Mar 08 22:17:27.979224 master-0 kubenswrapper[29458]: I0308 22:17:27.979194 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-cbcdbfdc5-b5crv" Mar 08 22:17:28.001920 master-0 kubenswrapper[29458]: I0308 22:17:28.001836 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-cbcdbfdc5-b5crv"] Mar 08 22:17:28.050435 master-0 kubenswrapper[29458]: I0308 22:17:28.050360 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4353e44c-f1db-4a07-b6bd-0feb86102961-console-serving-cert\") pod \"console-cbcdbfdc5-b5crv\" (UID: \"4353e44c-f1db-4a07-b6bd-0feb86102961\") " pod="openshift-console/console-cbcdbfdc5-b5crv" Mar 08 22:17:28.050677 master-0 kubenswrapper[29458]: I0308 22:17:28.050554 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4353e44c-f1db-4a07-b6bd-0feb86102961-console-config\") pod \"console-cbcdbfdc5-b5crv\" (UID: \"4353e44c-f1db-4a07-b6bd-0feb86102961\") " pod="openshift-console/console-cbcdbfdc5-b5crv" Mar 08 22:17:28.050871 master-0 kubenswrapper[29458]: I0308 22:17:28.050832 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4353e44c-f1db-4a07-b6bd-0feb86102961-oauth-serving-cert\") pod \"console-cbcdbfdc5-b5crv\" (UID: \"4353e44c-f1db-4a07-b6bd-0feb86102961\") " pod="openshift-console/console-cbcdbfdc5-b5crv" Mar 08 22:17:28.050934 master-0 kubenswrapper[29458]: I0308 22:17:28.050881 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4353e44c-f1db-4a07-b6bd-0feb86102961-service-ca\") pod \"console-cbcdbfdc5-b5crv\" (UID: \"4353e44c-f1db-4a07-b6bd-0feb86102961\") " pod="openshift-console/console-cbcdbfdc5-b5crv" Mar 08 22:17:28.051054 master-0 kubenswrapper[29458]: I0308 22:17:28.051021 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4353e44c-f1db-4a07-b6bd-0feb86102961-trusted-ca-bundle\") pod \"console-cbcdbfdc5-b5crv\" (UID: \"4353e44c-f1db-4a07-b6bd-0feb86102961\") " pod="openshift-console/console-cbcdbfdc5-b5crv" Mar 08 22:17:28.051135 master-0 kubenswrapper[29458]: I0308 22:17:28.051063 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4353e44c-f1db-4a07-b6bd-0feb86102961-console-oauth-config\") pod \"console-cbcdbfdc5-b5crv\" (UID: \"4353e44c-f1db-4a07-b6bd-0feb86102961\") " pod="openshift-console/console-cbcdbfdc5-b5crv" Mar 08 22:17:28.051231 master-0 kubenswrapper[29458]: I0308 22:17:28.051207 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pttg\" (UniqueName: \"kubernetes.io/projected/4353e44c-f1db-4a07-b6bd-0feb86102961-kube-api-access-2pttg\") pod \"console-cbcdbfdc5-b5crv\" (UID: \"4353e44c-f1db-4a07-b6bd-0feb86102961\") " pod="openshift-console/console-cbcdbfdc5-b5crv" Mar 08 22:17:28.153586 master-0 kubenswrapper[29458]: I0308 22:17:28.153402 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4353e44c-f1db-4a07-b6bd-0feb86102961-console-config\") pod \"console-cbcdbfdc5-b5crv\" (UID: 
\"4353e44c-f1db-4a07-b6bd-0feb86102961\") " pod="openshift-console/console-cbcdbfdc5-b5crv" Mar 08 22:17:28.153586 master-0 kubenswrapper[29458]: I0308 22:17:28.153487 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4353e44c-f1db-4a07-b6bd-0feb86102961-oauth-serving-cert\") pod \"console-cbcdbfdc5-b5crv\" (UID: \"4353e44c-f1db-4a07-b6bd-0feb86102961\") " pod="openshift-console/console-cbcdbfdc5-b5crv" Mar 08 22:17:28.153586 master-0 kubenswrapper[29458]: I0308 22:17:28.153525 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4353e44c-f1db-4a07-b6bd-0feb86102961-service-ca\") pod \"console-cbcdbfdc5-b5crv\" (UID: \"4353e44c-f1db-4a07-b6bd-0feb86102961\") " pod="openshift-console/console-cbcdbfdc5-b5crv" Mar 08 22:17:28.153586 master-0 kubenswrapper[29458]: I0308 22:17:28.153554 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4353e44c-f1db-4a07-b6bd-0feb86102961-trusted-ca-bundle\") pod \"console-cbcdbfdc5-b5crv\" (UID: \"4353e44c-f1db-4a07-b6bd-0feb86102961\") " pod="openshift-console/console-cbcdbfdc5-b5crv" Mar 08 22:17:28.154121 master-0 kubenswrapper[29458]: I0308 22:17:28.153956 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4353e44c-f1db-4a07-b6bd-0feb86102961-console-oauth-config\") pod \"console-cbcdbfdc5-b5crv\" (UID: \"4353e44c-f1db-4a07-b6bd-0feb86102961\") " pod="openshift-console/console-cbcdbfdc5-b5crv" Mar 08 22:17:28.154211 master-0 kubenswrapper[29458]: I0308 22:17:28.154163 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pttg\" (UniqueName: \"kubernetes.io/projected/4353e44c-f1db-4a07-b6bd-0feb86102961-kube-api-access-2pttg\") pod \"console-cbcdbfdc5-b5crv\" (UID: \"4353e44c-f1db-4a07-b6bd-0feb86102961\") " pod="openshift-console/console-cbcdbfdc5-b5crv" Mar 08 22:17:28.154359 master-0 kubenswrapper[29458]: I0308 22:17:28.154325 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4353e44c-f1db-4a07-b6bd-0feb86102961-console-serving-cert\") pod \"console-cbcdbfdc5-b5crv\" (UID: \"4353e44c-f1db-4a07-b6bd-0feb86102961\") " pod="openshift-console/console-cbcdbfdc5-b5crv" Mar 08 22:17:28.154791 master-0 kubenswrapper[29458]: I0308 22:17:28.154764 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4353e44c-f1db-4a07-b6bd-0feb86102961-service-ca\") pod \"console-cbcdbfdc5-b5crv\" (UID: \"4353e44c-f1db-4a07-b6bd-0feb86102961\") " pod="openshift-console/console-cbcdbfdc5-b5crv" Mar 08 22:17:28.154976 master-0 kubenswrapper[29458]: I0308 22:17:28.154946 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4353e44c-f1db-4a07-b6bd-0feb86102961-oauth-serving-cert\") pod \"console-cbcdbfdc5-b5crv\" (UID: \"4353e44c-f1db-4a07-b6bd-0feb86102961\") " pod="openshift-console/console-cbcdbfdc5-b5crv" Mar 08 22:17:28.155493 master-0 kubenswrapper[29458]: I0308 22:17:28.155446 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/4353e44c-f1db-4a07-b6bd-0feb86102961-console-config\") pod \"console-cbcdbfdc5-b5crv\" (UID: \"4353e44c-f1db-4a07-b6bd-0feb86102961\") " pod="openshift-console/console-cbcdbfdc5-b5crv" Mar 08 22:17:28.155617 master-0 kubenswrapper[29458]: I0308 22:17:28.155561 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4353e44c-f1db-4a07-b6bd-0feb86102961-trusted-ca-bundle\") pod \"console-cbcdbfdc5-b5crv\" (UID: \"4353e44c-f1db-4a07-b6bd-0feb86102961\") " pod="openshift-console/console-cbcdbfdc5-b5crv" Mar 08 22:17:28.157746 master-0 kubenswrapper[29458]: I0308 22:17:28.157707 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4353e44c-f1db-4a07-b6bd-0feb86102961-console-serving-cert\") pod \"console-cbcdbfdc5-b5crv\" (UID: \"4353e44c-f1db-4a07-b6bd-0feb86102961\") " pod="openshift-console/console-cbcdbfdc5-b5crv" Mar 08 22:17:28.161166 master-0 kubenswrapper[29458]: I0308 22:17:28.161107 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4353e44c-f1db-4a07-b6bd-0feb86102961-console-oauth-config\") pod \"console-cbcdbfdc5-b5crv\" (UID: \"4353e44c-f1db-4a07-b6bd-0feb86102961\") " pod="openshift-console/console-cbcdbfdc5-b5crv" Mar 08 22:17:28.184138 master-0 kubenswrapper[29458]: I0308 22:17:28.184056 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pttg\" (UniqueName: \"kubernetes.io/projected/4353e44c-f1db-4a07-b6bd-0feb86102961-kube-api-access-2pttg\") pod \"console-cbcdbfdc5-b5crv\" (UID: \"4353e44c-f1db-4a07-b6bd-0feb86102961\") " pod="openshift-console/console-cbcdbfdc5-b5crv" Mar 08 22:17:28.294757 master-0 kubenswrapper[29458]: I0308 22:17:28.294682 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-cbcdbfdc5-b5crv" Mar 08 22:17:28.679045 master-0 kubenswrapper[29458]: I0308 22:17:28.678979 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-798c96757f-zln5h_477076a3-21e3-4a37-a442-54cd4d4ff12e/console/0.log" Mar 08 22:17:28.679433 master-0 kubenswrapper[29458]: I0308 22:17:28.679102 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-798c96757f-zln5h" event={"ID":"477076a3-21e3-4a37-a442-54cd4d4ff12e","Type":"ContainerStarted","Data":"790255ffe12cb0e20e058053bac6dd672a26823a7a323e3ac6c9ba2431bd075b"} Mar 08 22:17:28.703911 master-0 kubenswrapper[29458]: I0308 22:17:28.703778 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-798c96757f-zln5h" podStartSLOduration=66.703754639 podStartE2EDuration="1m6.703754639s" podCreationTimestamp="2026-03-08 22:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:17:28.702818393 +0000 UTC m=+217.990875995" watchObservedRunningTime="2026-03-08 22:17:28.703754639 +0000 UTC m=+217.991812241" Mar 08 22:17:28.781748 master-0 kubenswrapper[29458]: I0308 22:17:28.781659 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-cbcdbfdc5-b5crv"] Mar 08 22:17:29.598590 master-0 kubenswrapper[29458]: I0308 22:17:29.598450 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-695dfc9f84-n5pqv" Mar 08 22:17:29.598590 master-0 kubenswrapper[29458]: I0308 22:17:29.598568 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-695dfc9f84-n5pqv" Mar 08 22:17:29.600974 master-0 kubenswrapper[29458]: I0308 22:17:29.600901 29458 patch_prober.go:28] interesting pod/console-695dfc9f84-n5pqv container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 08 22:17:29.601125 master-0 kubenswrapper[29458]: I0308 22:17:29.600996 29458 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-695dfc9f84-n5pqv" podUID="85d1ad38-c1b6-4fc4-a852-703ba6171ca3" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 08 22:17:29.689379 master-0 kubenswrapper[29458]: I0308 22:17:29.689299 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-cbcdbfdc5-b5crv" event={"ID":"4353e44c-f1db-4a07-b6bd-0feb86102961","Type":"ContainerStarted","Data":"91fac66385dbbbb506127846352f85f89c6d82af8337ab47935210b5bc8e9c1a"} Mar 08 22:17:29.689379 master-0 kubenswrapper[29458]: I0308 22:17:29.689364 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-cbcdbfdc5-b5crv" event={"ID":"4353e44c-f1db-4a07-b6bd-0feb86102961","Type":"ContainerStarted","Data":"f7dfc27814b9cf2910eff1e67360a97f697b2da6c5d4f804a7edc708dc7a9cff"} Mar 08 22:17:29.722259 master-0 kubenswrapper[29458]: I0308 22:17:29.721963 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-cbcdbfdc5-b5crv" podStartSLOduration=2.721939311 podStartE2EDuration="2.721939311s" podCreationTimestamp="2026-03-08 22:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:17:29.719937616 +0000 UTC m=+219.007995198" watchObservedRunningTime="2026-03-08 22:17:29.721939311 +0000 UTC m=+219.009996903" Mar 08 22:17:36.826286 master-0 kubenswrapper[29458]: I0308 22:17:36.826188 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:17:38.295971 master-0 kubenswrapper[29458]: I0308 22:17:38.295864 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-cbcdbfdc5-b5crv" Mar 08 22:17:38.296898 master-0 kubenswrapper[29458]: I0308 22:17:38.295994 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-cbcdbfdc5-b5crv" Mar 08 22:17:38.298177 master-0 kubenswrapper[29458]: I0308 22:17:38.298076 29458 patch_prober.go:28] interesting pod/console-cbcdbfdc5-b5crv container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Mar 08 22:17:38.298293 master-0 kubenswrapper[29458]: I0308 22:17:38.298220 29458 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-cbcdbfdc5-b5crv" podUID="4353e44c-f1db-4a07-b6bd-0feb86102961" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Mar 08 22:17:39.599789 master-0 kubenswrapper[29458]: I0308 22:17:39.599690 29458 patch_prober.go:28] interesting pod/console-695dfc9f84-n5pqv container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 08 22:17:39.600902 master-0 kubenswrapper[29458]: I0308 22:17:39.599785 29458 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-695dfc9f84-n5pqv" podUID="85d1ad38-c1b6-4fc4-a852-703ba6171ca3" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 08 22:17:48.296321 master-0 kubenswrapper[29458]: I0308 22:17:48.296208 29458 patch_prober.go:28] interesting pod/console-cbcdbfdc5-b5crv container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Mar 08 22:17:48.297563 master-0 kubenswrapper[29458]: I0308 22:17:48.296320 29458 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-cbcdbfdc5-b5crv" podUID="4353e44c-f1db-4a07-b6bd-0feb86102961" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Mar 08 22:17:48.700276 master-0 kubenswrapper[29458]: I0308 22:17:48.700199 29458 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" podUID="20eeccf6-8546-446e-be99-555bcc738272" containerName="oauth-openshift" containerID="cri-o://a6d9f1e11c525793dca2ef77485eed2565fb204e43ed85234cb2499581944f03" gracePeriod=15 Mar 08 22:17:48.871129 master-0 kubenswrapper[29458]: I0308 22:17:48.870975 29458 patch_prober.go:28] interesting pod/oauth-openshift-c84587d9b-7j6cs container/oauth-openshift 
namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.128.0.90:6443/healthz\": dial tcp 10.128.0.90:6443: connect: connection refused" start-of-body= Mar 08 22:17:48.871129 master-0 kubenswrapper[29458]: I0308 22:17:48.871108 29458 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" podUID="20eeccf6-8546-446e-be99-555bcc738272" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.128.0.90:6443/healthz\": dial tcp 10.128.0.90:6443: connect: connection refused" Mar 08 22:17:48.892463 master-0 kubenswrapper[29458]: I0308 22:17:48.892359 29458 generic.go:334] "Generic (PLEG): container finished" podID="20eeccf6-8546-446e-be99-555bcc738272" containerID="a6d9f1e11c525793dca2ef77485eed2565fb204e43ed85234cb2499581944f03" exitCode=0 Mar 08 22:17:48.892463 master-0 kubenswrapper[29458]: I0308 22:17:48.892435 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" event={"ID":"20eeccf6-8546-446e-be99-555bcc738272","Type":"ContainerDied","Data":"a6d9f1e11c525793dca2ef77485eed2565fb204e43ed85234cb2499581944f03"} Mar 08 22:17:49.216947 master-0 kubenswrapper[29458]: I0308 22:17:49.216875 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:17:49.261973 master-0 kubenswrapper[29458]: I0308 22:17:49.261858 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-user-template-login\") pod \"20eeccf6-8546-446e-be99-555bcc738272\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " Mar 08 22:17:49.262351 master-0 kubenswrapper[29458]: I0308 22:17:49.262002 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-trusted-ca-bundle\") pod \"20eeccf6-8546-446e-be99-555bcc738272\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " Mar 08 22:17:49.262652 master-0 kubenswrapper[29458]: I0308 22:17:49.262599 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "20eeccf6-8546-446e-be99-555bcc738272" (UID: "20eeccf6-8546-446e-be99-555bcc738272"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:17:49.262750 master-0 kubenswrapper[29458]: I0308 22:17:49.262732 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d64bc\" (UniqueName: \"kubernetes.io/projected/20eeccf6-8546-446e-be99-555bcc738272-kube-api-access-d64bc\") pod \"20eeccf6-8546-446e-be99-555bcc738272\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " Mar 08 22:17:49.262826 master-0 kubenswrapper[29458]: I0308 22:17:49.262794 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-serving-cert\") pod \"20eeccf6-8546-446e-be99-555bcc738272\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " Mar 08 22:17:49.262899 master-0 kubenswrapper[29458]: I0308 22:17:49.262831 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-user-template-provider-selection\") pod \"20eeccf6-8546-446e-be99-555bcc738272\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " Mar 08 22:17:49.262899 master-0 kubenswrapper[29458]: I0308 22:17:49.262856 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-service-ca\") pod \"20eeccf6-8546-446e-be99-555bcc738272\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " Mar 08 22:17:49.262899 master-0 kubenswrapper[29458]: I0308 22:17:49.262881 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20eeccf6-8546-446e-be99-555bcc738272-audit-policies\") pod \"20eeccf6-8546-446e-be99-555bcc738272\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " Mar 08 22:17:49.263194 master-0 kubenswrapper[29458]: I0308 22:17:49.262903 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-user-template-error\") pod \"20eeccf6-8546-446e-be99-555bcc738272\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " Mar 08 22:17:49.263194 master-0 kubenswrapper[29458]: I0308 22:17:49.262927 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20eeccf6-8546-446e-be99-555bcc738272-audit-dir\") pod \"20eeccf6-8546-446e-be99-555bcc738272\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " Mar 08 22:17:49.263194 master-0 kubenswrapper[29458]: I0308 22:17:49.262949 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-router-certs\") pod \"20eeccf6-8546-446e-be99-555bcc738272\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " Mar 08 22:17:49.263194 master-0 kubenswrapper[29458]: I0308 22:17:49.262998 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-session\") pod \"20eeccf6-8546-446e-be99-555bcc738272\" (UID: 
\"20eeccf6-8546-446e-be99-555bcc738272\") " Mar 08 22:17:49.263194 master-0 kubenswrapper[29458]: I0308 22:17:49.263030 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-ocp-branding-template\") pod \"20eeccf6-8546-446e-be99-555bcc738272\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " Mar 08 22:17:49.263194 master-0 kubenswrapper[29458]: I0308 22:17:49.263108 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-cliconfig\") pod \"20eeccf6-8546-446e-be99-555bcc738272\" (UID: \"20eeccf6-8546-446e-be99-555bcc738272\") " Mar 08 22:17:49.263594 master-0 kubenswrapper[29458]: I0308 22:17:49.263460 29458 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 08 22:17:49.264066 master-0 kubenswrapper[29458]: I0308 22:17:49.264016 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "20eeccf6-8546-446e-be99-555bcc738272" (UID: "20eeccf6-8546-446e-be99-555bcc738272"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:17:49.265426 master-0 kubenswrapper[29458]: I0308 22:17:49.265392 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "20eeccf6-8546-446e-be99-555bcc738272" (UID: "20eeccf6-8546-446e-be99-555bcc738272"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 22:17:49.267301 master-0 kubenswrapper[29458]: I0308 22:17:49.267255 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20eeccf6-8546-446e-be99-555bcc738272-kube-api-access-d64bc" (OuterVolumeSpecName: "kube-api-access-d64bc") pod "20eeccf6-8546-446e-be99-555bcc738272" (UID: "20eeccf6-8546-446e-be99-555bcc738272"). InnerVolumeSpecName "kube-api-access-d64bc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 22:17:49.267644 master-0 kubenswrapper[29458]: I0308 22:17:49.267605 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "20eeccf6-8546-446e-be99-555bcc738272" (UID: "20eeccf6-8546-446e-be99-555bcc738272"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 22:17:49.267747 master-0 kubenswrapper[29458]: I0308 22:17:49.267674 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20eeccf6-8546-446e-be99-555bcc738272-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "20eeccf6-8546-446e-be99-555bcc738272" (UID: "20eeccf6-8546-446e-be99-555bcc738272"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:17:49.273258 master-0 kubenswrapper[29458]: I0308 22:17:49.273179 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d"] Mar 08 22:17:49.273656 master-0 kubenswrapper[29458]: E0308 22:17:49.273612 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20eeccf6-8546-446e-be99-555bcc738272" containerName="oauth-openshift" Mar 08 22:17:49.273656 master-0 kubenswrapper[29458]: I0308 22:17:49.273641 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="20eeccf6-8546-446e-be99-555bcc738272" containerName="oauth-openshift" Mar 08 22:17:49.273889 master-0 kubenswrapper[29458]: I0308 22:17:49.273847 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="20eeccf6-8546-446e-be99-555bcc738272" containerName="oauth-openshift" Mar 08 22:17:49.273889 master-0 kubenswrapper[29458]: I0308 22:17:49.273842 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "20eeccf6-8546-446e-be99-555bcc738272" (UID: "20eeccf6-8546-446e-be99-555bcc738272"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 22:17:49.274618 master-0 kubenswrapper[29458]: I0308 22:17:49.274501 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.276732 master-0 kubenswrapper[29458]: I0308 22:17:49.275825 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "20eeccf6-8546-446e-be99-555bcc738272" (UID: "20eeccf6-8546-446e-be99-555bcc738272"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:17:49.278660 master-0 kubenswrapper[29458]: I0308 22:17:49.278600 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20eeccf6-8546-446e-be99-555bcc738272-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "20eeccf6-8546-446e-be99-555bcc738272" (UID: "20eeccf6-8546-446e-be99-555bcc738272"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:17:49.278790 master-0 kubenswrapper[29458]: I0308 22:17:49.278742 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "20eeccf6-8546-446e-be99-555bcc738272" (UID: "20eeccf6-8546-446e-be99-555bcc738272"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 22:17:49.278790 master-0 kubenswrapper[29458]: I0308 22:17:49.278738 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "20eeccf6-8546-446e-be99-555bcc738272" (UID: "20eeccf6-8546-446e-be99-555bcc738272"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 22:17:49.285693 master-0 kubenswrapper[29458]: I0308 22:17:49.285227 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d"] Mar 08 22:17:49.286354 master-0 kubenswrapper[29458]: I0308 22:17:49.286040 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "20eeccf6-8546-446e-be99-555bcc738272" (UID: "20eeccf6-8546-446e-be99-555bcc738272"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 22:17:49.289398 master-0 kubenswrapper[29458]: I0308 22:17:49.289295 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "20eeccf6-8546-446e-be99-555bcc738272" (UID: "20eeccf6-8546-446e-be99-555bcc738272"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 22:17:49.364944 master-0 kubenswrapper[29458]: I0308 22:17:49.364873 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-v4-0-config-system-service-ca\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.364944 master-0 kubenswrapper[29458]: I0308 22:17:49.364936 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-v4-0-config-system-session\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.365706 master-0 kubenswrapper[29458]: I0308 22:17:49.364964 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.365706 master-0 kubenswrapper[29458]: I0308 22:17:49.365050 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-audit-policies\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.365706 master-0 kubenswrapper[29458]: I0308 22:17:49.365154 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" 
(UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.365706 master-0 kubenswrapper[29458]: I0308 22:17:49.365196 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.365706 master-0 kubenswrapper[29458]: I0308 22:17:49.365237 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-v4-0-config-system-router-certs\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.365706 master-0 kubenswrapper[29458]: I0308 22:17:49.365307 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.365706 master-0 kubenswrapper[29458]: I0308 22:17:49.365328 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-v4-0-config-user-template-login\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.365706 master-0 kubenswrapper[29458]: I0308 22:17:49.365367 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxw85\" (UniqueName: \"kubernetes.io/projected/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-kube-api-access-mxw85\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.365706 master-0 kubenswrapper[29458]: I0308 22:17:49.365401 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-audit-dir\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.365706 master-0 kubenswrapper[29458]: I0308 22:17:49.365422 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.365706 master-0 kubenswrapper[29458]: I0308 22:17:49.365448 29458 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-v4-0-config-user-template-error\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.365706 master-0 kubenswrapper[29458]: I0308 22:17:49.365507 29458 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 08 22:17:49.365706 master-0 kubenswrapper[29458]: I0308 22:17:49.365518 29458 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 08 22:17:49.365706 master-0 kubenswrapper[29458]: I0308 22:17:49.365529 29458 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Mar 08 22:17:49.365706 master-0 kubenswrapper[29458]: I0308 22:17:49.365539 29458 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Mar 08 22:17:49.365706 master-0 kubenswrapper[29458]: I0308 22:17:49.365554 29458 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20eeccf6-8546-446e-be99-555bcc738272-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 22:17:49.365706 master-0 kubenswrapper[29458]: I0308 22:17:49.365564 29458 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20eeccf6-8546-446e-be99-555bcc738272-audit-policies\") on node \"master-0\" DevicePath \"\"" Mar 08 22:17:49.365706 master-0 kubenswrapper[29458]: I0308 22:17:49.365572 29458 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Mar 08 22:17:49.365706 master-0 kubenswrapper[29458]: I0308 22:17:49.365581 29458 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Mar 08 22:17:49.365706 master-0 kubenswrapper[29458]: I0308 22:17:49.365592 29458 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Mar 08 22:17:49.365706 master-0 kubenswrapper[29458]: I0308 22:17:49.365601 29458 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Mar 08 22:17:49.365706 master-0 
kubenswrapper[29458]: I0308 22:17:49.365613 29458 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/20eeccf6-8546-446e-be99-555bcc738272-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Mar 08 22:17:49.365706 master-0 kubenswrapper[29458]: I0308 22:17:49.365621 29458 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d64bc\" (UniqueName: \"kubernetes.io/projected/20eeccf6-8546-446e-be99-555bcc738272-kube-api-access-d64bc\") on node \"master-0\" DevicePath \"\"" Mar 08 22:17:49.467653 master-0 kubenswrapper[29458]: I0308 22:17:49.467477 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.467653 master-0 kubenswrapper[29458]: I0308 22:17:49.467551 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-v4-0-config-user-template-login\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.467653 master-0 kubenswrapper[29458]: I0308 22:17:49.467584 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxw85\" (UniqueName: \"kubernetes.io/projected/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-kube-api-access-mxw85\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.468123 master-0 kubenswrapper[29458]: I0308 22:17:49.467694 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-audit-dir\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.468123 master-0 kubenswrapper[29458]: I0308 22:17:49.467790 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.468123 master-0 kubenswrapper[29458]: I0308 22:17:49.467990 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-audit-dir\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.469773 master-0 kubenswrapper[29458]: I0308 22:17:49.468106 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-v4-0-config-user-template-error\") pod 
\"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.469773 master-0 kubenswrapper[29458]: I0308 22:17:49.468321 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-v4-0-config-system-service-ca\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.469773 master-0 kubenswrapper[29458]: I0308 22:17:49.468410 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-v4-0-config-system-session\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.469773 master-0 kubenswrapper[29458]: I0308 22:17:49.468508 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.469773 master-0 kubenswrapper[29458]: I0308 22:17:49.468568 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-audit-policies\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.469773 master-0 kubenswrapper[29458]: I0308 22:17:49.468615 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.469773 master-0 kubenswrapper[29458]: I0308 22:17:49.468855 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.469773 master-0 kubenswrapper[29458]: I0308 22:17:49.468916 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-v4-0-config-system-router-certs\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.469773 master-0 kubenswrapper[29458]: I0308 22:17:49.469438 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.470814 master-0 kubenswrapper[29458]: I0308 22:17:49.470118 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-v4-0-config-system-service-ca\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.470814 master-0 kubenswrapper[29458]: I0308 22:17:49.470171 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-audit-policies\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.473760 master-0 kubenswrapper[29458]: I0308 22:17:49.473682 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.473760 master-0 kubenswrapper[29458]: I0308 22:17:49.473708 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-v4-0-config-user-template-error\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.474120 master-0 kubenswrapper[29458]: I0308 22:17:49.474091 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.474387 master-0 kubenswrapper[29458]: I0308 22:17:49.474343 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.474845 master-0 kubenswrapper[29458]: I0308 22:17:49.474777 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-v4-0-config-user-template-login\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.475465 master-0 kubenswrapper[29458]: I0308 22:17:49.475402 29458 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-v4-0-config-system-router-certs\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.476041 master-0 kubenswrapper[29458]: I0308 22:17:49.475994 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-v4-0-config-system-session\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.476379 master-0 kubenswrapper[29458]: I0308 22:17:49.476335 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.500447 master-0 kubenswrapper[29458]: I0308 22:17:49.500382 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxw85\" (UniqueName: \"kubernetes.io/projected/20fe0df1-74a0-45a3-8e6d-d52394c8ebbf-kube-api-access-mxw85\") pod \"oauth-openshift-79fb9b5d66-rsn9d\" (UID: \"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf\") " pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.598657 master-0 kubenswrapper[29458]: I0308 22:17:49.598579 29458 patch_prober.go:28] interesting pod/console-695dfc9f84-n5pqv container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 08 22:17:49.599224 master-0 kubenswrapper[29458]: I0308 22:17:49.599171 29458 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-695dfc9f84-n5pqv" podUID="85d1ad38-c1b6-4fc4-a852-703ba6171ca3" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 08 22:17:49.638580 master-0 kubenswrapper[29458]: I0308 22:17:49.638437 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:49.903260 master-0 kubenswrapper[29458]: I0308 22:17:49.902664 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" event={"ID":"20eeccf6-8546-446e-be99-555bcc738272","Type":"ContainerDied","Data":"20a4b2bfc53e1f0a3c68b7d82be12654f24f7987d924f354f6872a83092ed569"} Mar 08 22:17:49.903260 master-0 kubenswrapper[29458]: I0308 22:17:49.902741 29458 scope.go:117] "RemoveContainer" containerID="a6d9f1e11c525793dca2ef77485eed2565fb204e43ed85234cb2499581944f03" Mar 08 22:17:49.903260 master-0 kubenswrapper[29458]: I0308 22:17:49.902764 29458 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-c84587d9b-7j6cs" Mar 08 22:17:49.953284 master-0 kubenswrapper[29458]: I0308 22:17:49.953229 29458 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-c84587d9b-7j6cs"] Mar 08 22:17:49.956833 master-0 kubenswrapper[29458]: I0308 22:17:49.956775 29458 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-c84587d9b-7j6cs"] Mar 08 22:17:50.173918 master-0 kubenswrapper[29458]: I0308 22:17:50.170458 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-5cbd49d755-fpplh"] Mar 08 22:17:50.173918 master-0 kubenswrapper[29458]: I0308 22:17:50.171858 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cbd49d755-fpplh" Mar 08 22:17:50.183128 master-0 kubenswrapper[29458]: I0308 22:17:50.182392 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 08 22:17:50.183128 master-0 kubenswrapper[29458]: I0308 22:17:50.182677 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 08 22:17:50.184755 master-0 kubenswrapper[29458]: I0308 22:17:50.184715 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d"] Mar 08 22:17:50.190927 master-0 kubenswrapper[29458]: I0308 22:17:50.190835 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-5cbd49d755-fpplh"] Mar 08 22:17:50.283630 master-0 kubenswrapper[29458]: I0308 22:17:50.283558 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f2ddba33-5b68-433e-a146-ac15ea2aabb6-nginx-conf\") pod \"networking-console-plugin-5cbd49d755-fpplh\" (UID: \"f2ddba33-5b68-433e-a146-ac15ea2aabb6\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-fpplh" Mar 08 22:17:50.283881 master-0 kubenswrapper[29458]: I0308 22:17:50.283762 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/f2ddba33-5b68-433e-a146-ac15ea2aabb6-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-fpplh\" (UID: \"f2ddba33-5b68-433e-a146-ac15ea2aabb6\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-fpplh" Mar 08 22:17:50.385421 master-0 kubenswrapper[29458]: I0308 22:17:50.385132 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/f2ddba33-5b68-433e-a146-ac15ea2aabb6-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-fpplh\" (UID: \"f2ddba33-5b68-433e-a146-ac15ea2aabb6\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-fpplh" Mar 08 22:17:50.385421 master-0 kubenswrapper[29458]: I0308 22:17:50.385267 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f2ddba33-5b68-433e-a146-ac15ea2aabb6-nginx-conf\") pod \"networking-console-plugin-5cbd49d755-fpplh\" (UID: \"f2ddba33-5b68-433e-a146-ac15ea2aabb6\") " 
pod="openshift-network-console/networking-console-plugin-5cbd49d755-fpplh" Mar 08 22:17:50.386198 master-0 kubenswrapper[29458]: I0308 22:17:50.386151 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f2ddba33-5b68-433e-a146-ac15ea2aabb6-nginx-conf\") pod \"networking-console-plugin-5cbd49d755-fpplh\" (UID: \"f2ddba33-5b68-433e-a146-ac15ea2aabb6\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-fpplh" Mar 08 22:17:50.386316 master-0 kubenswrapper[29458]: E0308 22:17:50.386280 29458 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Mar 08 22:17:50.386361 master-0 kubenswrapper[29458]: E0308 22:17:50.386354 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2ddba33-5b68-433e-a146-ac15ea2aabb6-networking-console-plugin-cert podName:f2ddba33-5b68-433e-a146-ac15ea2aabb6 nodeName:}" failed. No retries permitted until 2026-03-08 22:17:50.886338169 +0000 UTC m=+240.174395761 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/f2ddba33-5b68-433e-a146-ac15ea2aabb6-networking-console-plugin-cert") pod "networking-console-plugin-5cbd49d755-fpplh" (UID: "f2ddba33-5b68-433e-a146-ac15ea2aabb6") : secret "networking-console-plugin-cert" not found Mar 08 22:17:50.893696 master-0 kubenswrapper[29458]: I0308 22:17:50.893562 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/f2ddba33-5b68-433e-a146-ac15ea2aabb6-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-fpplh\" (UID: \"f2ddba33-5b68-433e-a146-ac15ea2aabb6\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-fpplh" Mar 08 22:17:50.900029 master-0 kubenswrapper[29458]: I0308 22:17:50.899964 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/f2ddba33-5b68-433e-a146-ac15ea2aabb6-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-fpplh\" (UID: \"f2ddba33-5b68-433e-a146-ac15ea2aabb6\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-fpplh" Mar 08 22:17:50.917021 master-0 kubenswrapper[29458]: I0308 22:17:50.916913 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" event={"ID":"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf","Type":"ContainerStarted","Data":"c3e2a78fcdb6e34561e3d964edc145ae5c9f054e808a86569f7ed50ed94f6d3f"} Mar 08 22:17:50.917021 master-0 kubenswrapper[29458]: I0308 22:17:50.917003 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" event={"ID":"20fe0df1-74a0-45a3-8e6d-d52394c8ebbf","Type":"ContainerStarted","Data":"5c9c514238a26f643182fb99fa51a6f0bc953d01225b58c84bfdf0f763c020bb"} Mar 08 22:17:50.918034 master-0 kubenswrapper[29458]: I0308 22:17:50.917979 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:50.929417 master-0 kubenswrapper[29458]: I0308 22:17:50.929350 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" Mar 08 22:17:50.965257 
Mar 08 22:17:50.965257 master-0 kubenswrapper[29458]: I0308 22:17:50.965056 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-79fb9b5d66-rsn9d" podStartSLOduration=27.96501692 podStartE2EDuration="27.96501692s" podCreationTimestamp="2026-03-08 22:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:17:50.954300616 +0000 UTC m=+240.242358288" watchObservedRunningTime="2026-03-08 22:17:50.96501692 +0000 UTC m=+240.253074552"
Mar 08 22:17:50.988923 master-0 kubenswrapper[29458]: I0308 22:17:50.988780 29458 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20eeccf6-8546-446e-be99-555bcc738272" path="/var/lib/kubelet/pods/20eeccf6-8546-446e-be99-555bcc738272/volumes"
Mar 08 22:17:51.127107 master-0 kubenswrapper[29458]: I0308 22:17:51.127016 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cbd49d755-fpplh"
Mar 08 22:17:51.136102 master-0 kubenswrapper[29458]: I0308 22:17:51.134120 29458 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-695dfc9f84-n5pqv"]
Mar 08 22:17:51.165217 master-0 kubenswrapper[29458]: I0308 22:17:51.162596 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6994646879-wvkdk"]
Mar 08 22:17:51.165217 master-0 kubenswrapper[29458]: I0308 22:17:51.163492 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6994646879-wvkdk"
Mar 08 22:17:51.204980 master-0 kubenswrapper[29458]: I0308 22:17:51.204921 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6994646879-wvkdk"]
Mar 08 22:17:51.207654 master-0 kubenswrapper[29458]: I0308 22:17:51.207602 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fj5t\" (UniqueName: \"kubernetes.io/projected/3110f839-30af-42b0-87a0-39ae9db0da4f-kube-api-access-6fj5t\") pod \"console-6994646879-wvkdk\" (UID: \"3110f839-30af-42b0-87a0-39ae9db0da4f\") " pod="openshift-console/console-6994646879-wvkdk"
Mar 08 22:17:51.207735 master-0 kubenswrapper[29458]: I0308 22:17:51.207715 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3110f839-30af-42b0-87a0-39ae9db0da4f-console-config\") pod \"console-6994646879-wvkdk\" (UID: \"3110f839-30af-42b0-87a0-39ae9db0da4f\") " pod="openshift-console/console-6994646879-wvkdk"
Mar 08 22:17:51.207774 master-0 kubenswrapper[29458]: I0308 22:17:51.207742 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3110f839-30af-42b0-87a0-39ae9db0da4f-service-ca\") pod \"console-6994646879-wvkdk\" (UID: \"3110f839-30af-42b0-87a0-39ae9db0da4f\") " pod="openshift-console/console-6994646879-wvkdk"
Mar 08 22:17:51.207774 master-0 kubenswrapper[29458]: I0308 22:17:51.207764 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3110f839-30af-42b0-87a0-39ae9db0da4f-console-oauth-config\") pod \"console-6994646879-wvkdk\" (UID: \"3110f839-30af-42b0-87a0-39ae9db0da4f\") " pod="openshift-console/console-6994646879-wvkdk"
Mar 08 22:17:51.207835 master-0 kubenswrapper[29458]: I0308 22:17:51.207796 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3110f839-30af-42b0-87a0-39ae9db0da4f-oauth-serving-cert\") pod \"console-6994646879-wvkdk\" (UID: \"3110f839-30af-42b0-87a0-39ae9db0da4f\") " pod="openshift-console/console-6994646879-wvkdk"
Mar 08 22:17:51.207866 master-0 kubenswrapper[29458]: I0308 22:17:51.207834 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3110f839-30af-42b0-87a0-39ae9db0da4f-trusted-ca-bundle\") pod \"console-6994646879-wvkdk\" (UID: \"3110f839-30af-42b0-87a0-39ae9db0da4f\") " pod="openshift-console/console-6994646879-wvkdk"
Mar 08 22:17:51.208227 master-0 kubenswrapper[29458]: I0308 22:17:51.207874 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3110f839-30af-42b0-87a0-39ae9db0da4f-console-serving-cert\") pod \"console-6994646879-wvkdk\" (UID: \"3110f839-30af-42b0-87a0-39ae9db0da4f\") " pod="openshift-console/console-6994646879-wvkdk"
Mar 08 22:17:51.317016 master-0 kubenswrapper[29458]: I0308 22:17:51.316944 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3110f839-30af-42b0-87a0-39ae9db0da4f-console-config\") pod \"console-6994646879-wvkdk\" (UID: \"3110f839-30af-42b0-87a0-39ae9db0da4f\") " pod="openshift-console/console-6994646879-wvkdk"
Mar 08 22:17:51.317016 master-0 kubenswrapper[29458]: I0308 22:17:51.317006 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3110f839-30af-42b0-87a0-39ae9db0da4f-service-ca\") pod \"console-6994646879-wvkdk\" (UID: \"3110f839-30af-42b0-87a0-39ae9db0da4f\") " pod="openshift-console/console-6994646879-wvkdk"
Mar 08 22:17:51.317313 master-0 kubenswrapper[29458]: I0308 22:17:51.317036 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3110f839-30af-42b0-87a0-39ae9db0da4f-console-oauth-config\") pod \"console-6994646879-wvkdk\" (UID: \"3110f839-30af-42b0-87a0-39ae9db0da4f\") " pod="openshift-console/console-6994646879-wvkdk"
Mar 08 22:17:51.317313 master-0 kubenswrapper[29458]: I0308 22:17:51.317066 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3110f839-30af-42b0-87a0-39ae9db0da4f-oauth-serving-cert\") pod \"console-6994646879-wvkdk\" (UID: \"3110f839-30af-42b0-87a0-39ae9db0da4f\") " pod="openshift-console/console-6994646879-wvkdk"
Mar 08 22:17:51.317313 master-0 kubenswrapper[29458]: I0308 22:17:51.317171 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3110f839-30af-42b0-87a0-39ae9db0da4f-trusted-ca-bundle\") pod \"console-6994646879-wvkdk\" (UID: \"3110f839-30af-42b0-87a0-39ae9db0da4f\") " pod="openshift-console/console-6994646879-wvkdk"
Mar 08 22:17:51.317313 master-0 kubenswrapper[29458]: I0308 22:17:51.317296 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3110f839-30af-42b0-87a0-39ae9db0da4f-console-serving-cert\") pod \"console-6994646879-wvkdk\" (UID: \"3110f839-30af-42b0-87a0-39ae9db0da4f\") " pod="openshift-console/console-6994646879-wvkdk"
Mar 08 22:17:51.317467 master-0 kubenswrapper[29458]: I0308 22:17:51.317337 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fj5t\" (UniqueName: \"kubernetes.io/projected/3110f839-30af-42b0-87a0-39ae9db0da4f-kube-api-access-6fj5t\") pod \"console-6994646879-wvkdk\" (UID: \"3110f839-30af-42b0-87a0-39ae9db0da4f\") " pod="openshift-console/console-6994646879-wvkdk"
Mar 08 22:17:51.318657 master-0 kubenswrapper[29458]: I0308 22:17:51.318625 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3110f839-30af-42b0-87a0-39ae9db0da4f-oauth-serving-cert\") pod \"console-6994646879-wvkdk\" (UID: \"3110f839-30af-42b0-87a0-39ae9db0da4f\") " pod="openshift-console/console-6994646879-wvkdk"
Mar 08 22:17:51.323605 master-0 kubenswrapper[29458]: I0308 22:17:51.323553 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3110f839-30af-42b0-87a0-39ae9db0da4f-trusted-ca-bundle\") pod \"console-6994646879-wvkdk\" (UID: \"3110f839-30af-42b0-87a0-39ae9db0da4f\") " pod="openshift-console/console-6994646879-wvkdk"
Mar 08 22:17:51.328023 master-0 kubenswrapper[29458]: I0308 22:17:51.327983 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3110f839-30af-42b0-87a0-39ae9db0da4f-service-ca\") pod \"console-6994646879-wvkdk\" (UID: \"3110f839-30af-42b0-87a0-39ae9db0da4f\") " pod="openshift-console/console-6994646879-wvkdk"
Mar 08 22:17:51.329315 master-0 kubenswrapper[29458]: I0308 22:17:51.329272 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3110f839-30af-42b0-87a0-39ae9db0da4f-console-config\") pod \"console-6994646879-wvkdk\" (UID: \"3110f839-30af-42b0-87a0-39ae9db0da4f\") " pod="openshift-console/console-6994646879-wvkdk"
Mar 08 22:17:51.330200 master-0 kubenswrapper[29458]: I0308 22:17:51.330167 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3110f839-30af-42b0-87a0-39ae9db0da4f-console-oauth-config\") pod \"console-6994646879-wvkdk\" (UID: \"3110f839-30af-42b0-87a0-39ae9db0da4f\") " pod="openshift-console/console-6994646879-wvkdk"
Mar 08 22:17:51.334925 master-0 kubenswrapper[29458]: I0308 22:17:51.334887 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fj5t\" (UniqueName: \"kubernetes.io/projected/3110f839-30af-42b0-87a0-39ae9db0da4f-kube-api-access-6fj5t\") pod \"console-6994646879-wvkdk\" (UID: \"3110f839-30af-42b0-87a0-39ae9db0da4f\") " pod="openshift-console/console-6994646879-wvkdk"
Mar 08 22:17:51.336021 master-0 kubenswrapper[29458]: I0308 22:17:51.335988 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3110f839-30af-42b0-87a0-39ae9db0da4f-console-serving-cert\") pod \"console-6994646879-wvkdk\" (UID: \"3110f839-30af-42b0-87a0-39ae9db0da4f\") " pod="openshift-console/console-6994646879-wvkdk"
Mar 08 22:17:51.543101 master-0 kubenswrapper[29458]: I0308 22:17:51.542863 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6994646879-wvkdk"
Mar 08 22:17:51.643432 master-0 kubenswrapper[29458]: I0308 22:17:51.643359 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-5cbd49d755-fpplh"]
Mar 08 22:17:51.652984 master-0 kubenswrapper[29458]: W0308 22:17:51.651999 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2ddba33_5b68_433e_a146_ac15ea2aabb6.slice/crio-b556cf48fe0d479c98414e80ca4fdf03ddda979c55a27d419aa849c88368ded4 WatchSource:0}: Error finding container b556cf48fe0d479c98414e80ca4fdf03ddda979c55a27d419aa849c88368ded4: Status 404 returned error can't find the container with id b556cf48fe0d479c98414e80ca4fdf03ddda979c55a27d419aa849c88368ded4
Mar 08 22:17:51.931956 master-0 kubenswrapper[29458]: I0308 22:17:51.931873 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5cbd49d755-fpplh" event={"ID":"f2ddba33-5b68-433e-a146-ac15ea2aabb6","Type":"ContainerStarted","Data":"b556cf48fe0d479c98414e80ca4fdf03ddda979c55a27d419aa849c88368ded4"}
Mar 08 22:17:51.982300 master-0 kubenswrapper[29458]: I0308 22:17:51.981215 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6994646879-wvkdk"]
Mar 08 22:17:51.986965 master-0 kubenswrapper[29458]: W0308 22:17:51.986919 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3110f839_30af_42b0_87a0_39ae9db0da4f.slice/crio-c30e18c90882efd2573c741d34a2c032f25aed8f79fb244fc308acc028d2c8e2 WatchSource:0}: Error finding container c30e18c90882efd2573c741d34a2c032f25aed8f79fb244fc308acc028d2c8e2: Status 404 returned error can't find the container with id c30e18c90882efd2573c741d34a2c032f25aed8f79fb244fc308acc028d2c8e2
Mar 08 22:17:52.956708 master-0 kubenswrapper[29458]: I0308 22:17:52.956644 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6994646879-wvkdk" event={"ID":"3110f839-30af-42b0-87a0-39ae9db0da4f","Type":"ContainerStarted","Data":"2eaa662799f6f85f916d6dfddee48d23732fea52621741a2d1be7c04c0e0d313"}
Mar 08 22:17:52.956708 master-0 kubenswrapper[29458]: I0308 22:17:52.956715 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6994646879-wvkdk" event={"ID":"3110f839-30af-42b0-87a0-39ae9db0da4f","Type":"ContainerStarted","Data":"c30e18c90882efd2573c741d34a2c032f25aed8f79fb244fc308acc028d2c8e2"}
Mar 08 22:17:53.967468 master-0 kubenswrapper[29458]: I0308 22:17:53.967370 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5cbd49d755-fpplh" event={"ID":"f2ddba33-5b68-433e-a146-ac15ea2aabb6","Type":"ContainerStarted","Data":"04e82725f9d9aae558c5760b59cecbdd575801d73b85f48d239106744edecab6"}
Mar 08 22:17:53.994557 master-0 kubenswrapper[29458]: I0308 22:17:53.994290 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-5cbd49d755-fpplh" podStartSLOduration=2.236343559 podStartE2EDuration="3.994259731s" podCreationTimestamp="2026-03-08 22:17:50 +0000 UTC" firstStartedPulling="2026-03-08 22:17:51.659402366 +0000 UTC m=+240.947459978" lastFinishedPulling="2026-03-08 22:17:53.417318558 +0000 UTC m=+242.705376150" observedRunningTime="2026-03-08 22:17:53.99384791 +0000 UTC m=+243.281905542" watchObservedRunningTime="2026-03-08 22:17:53.994259731 +0000 UTC m=+243.282317333"
Mar 08 22:17:53.998377 master-0 kubenswrapper[29458]: I0308 22:17:53.998323 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6994646879-wvkdk" podStartSLOduration=2.998305812 podStartE2EDuration="2.998305812s" podCreationTimestamp="2026-03-08 22:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:17:52.997121657 +0000 UTC m=+242.285179259" watchObservedRunningTime="2026-03-08 22:17:53.998305812 +0000 UTC m=+243.286363404"
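The two pod_startup_latency_tracker entries above make the relationship between the reported durations checkable: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). For networking-console-plugin-5cbd49d755-fpplh: 3.994259731s minus (22:17:53.417318558 - 22:17:51.659402366 = 1.757916192s) gives about 2.236343539s, which matches the reported 2.236343559 up to rounding; where no pull happened (zero-value pull timestamps, as for oauth-openshift and console above), the SLO and E2E durations coincide. A small Go check of that arithmetic, with timestamps copied from the log (the subtraction rule is an inference from these numbers, not a statement of the kubelet's exact code):

package main

import (
	"fmt"
	"time"
)

// Layout matching the timestamps printed in the log lines above.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-03-08 22:17:50 +0000 UTC")
	running := mustParse("2026-03-08 22:17:53.994259731 +0000 UTC")
	pullStart := mustParse("2026-03-08 22:17:51.659402366 +0000 UTC")
	pullEnd := mustParse("2026-03-08 22:17:53.417318558 +0000 UTC")

	e2e := running.Sub(created)         // podStartE2EDuration: 3.994259731s
	slo := e2e - pullEnd.Sub(pullStart) // E2E minus image-pull time: ~2.236343539s
	fmt.Println(e2e, slo)
}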
Need to start a new one" pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:17:55.400585 master-0 kubenswrapper[29458]: I0308 22:17:55.400480 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-serving-cert\") pod \"477076a3-21e3-4a37-a442-54cd4d4ff12e\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " Mar 08 22:17:55.401473 master-0 kubenswrapper[29458]: I0308 22:17:55.400602 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/477076a3-21e3-4a37-a442-54cd4d4ff12e-oauth-serving-cert\") pod \"477076a3-21e3-4a37-a442-54cd4d4ff12e\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " Mar 08 22:17:55.401473 master-0 kubenswrapper[29458]: I0308 22:17:55.400786 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-config\") pod \"477076a3-21e3-4a37-a442-54cd4d4ff12e\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " Mar 08 22:17:55.401473 master-0 kubenswrapper[29458]: I0308 22:17:55.401351 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/477076a3-21e3-4a37-a442-54cd4d4ff12e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "477076a3-21e3-4a37-a442-54cd4d4ff12e" (UID: "477076a3-21e3-4a37-a442-54cd4d4ff12e"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:17:55.401473 master-0 kubenswrapper[29458]: I0308 22:17:55.401375 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqlz5\" (UniqueName: \"kubernetes.io/projected/477076a3-21e3-4a37-a442-54cd4d4ff12e-kube-api-access-fqlz5\") pod \"477076a3-21e3-4a37-a442-54cd4d4ff12e\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " Mar 08 22:17:55.402631 master-0 kubenswrapper[29458]: I0308 22:17:55.401682 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/477076a3-21e3-4a37-a442-54cd4d4ff12e-trusted-ca-bundle\") pod \"477076a3-21e3-4a37-a442-54cd4d4ff12e\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " Mar 08 22:17:55.404571 master-0 kubenswrapper[29458]: I0308 22:17:55.404414 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/477076a3-21e3-4a37-a442-54cd4d4ff12e-service-ca\") pod \"477076a3-21e3-4a37-a442-54cd4d4ff12e\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " Mar 08 22:17:55.404885 master-0 kubenswrapper[29458]: I0308 22:17:55.404673 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-oauth-config\") pod \"477076a3-21e3-4a37-a442-54cd4d4ff12e\" (UID: \"477076a3-21e3-4a37-a442-54cd4d4ff12e\") " Mar 08 22:17:55.404885 master-0 kubenswrapper[29458]: I0308 22:17:55.401808 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-config" (OuterVolumeSpecName: "console-config") pod "477076a3-21e3-4a37-a442-54cd4d4ff12e" (UID: "477076a3-21e3-4a37-a442-54cd4d4ff12e"). 
InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:17:55.405696 master-0 kubenswrapper[29458]: I0308 22:17:55.405631 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/477076a3-21e3-4a37-a442-54cd4d4ff12e-service-ca" (OuterVolumeSpecName: "service-ca") pod "477076a3-21e3-4a37-a442-54cd4d4ff12e" (UID: "477076a3-21e3-4a37-a442-54cd4d4ff12e"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:17:55.405928 master-0 kubenswrapper[29458]: I0308 22:17:55.405702 29458 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/477076a3-21e3-4a37-a442-54cd4d4ff12e-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 08 22:17:55.405928 master-0 kubenswrapper[29458]: I0308 22:17:55.405735 29458 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-config\") on node \"master-0\" DevicePath \"\"" Mar 08 22:17:55.406911 master-0 kubenswrapper[29458]: I0308 22:17:55.406814 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/477076a3-21e3-4a37-a442-54cd4d4ff12e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "477076a3-21e3-4a37-a442-54cd4d4ff12e" (UID: "477076a3-21e3-4a37-a442-54cd4d4ff12e"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:17:55.408983 master-0 kubenswrapper[29458]: I0308 22:17:55.408937 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/477076a3-21e3-4a37-a442-54cd4d4ff12e-kube-api-access-fqlz5" (OuterVolumeSpecName: "kube-api-access-fqlz5") pod "477076a3-21e3-4a37-a442-54cd4d4ff12e" (UID: "477076a3-21e3-4a37-a442-54cd4d4ff12e"). InnerVolumeSpecName "kube-api-access-fqlz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 22:17:55.410235 master-0 kubenswrapper[29458]: I0308 22:17:55.410051 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "477076a3-21e3-4a37-a442-54cd4d4ff12e" (UID: "477076a3-21e3-4a37-a442-54cd4d4ff12e"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 22:17:55.410379 master-0 kubenswrapper[29458]: I0308 22:17:55.410240 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "477076a3-21e3-4a37-a442-54cd4d4ff12e" (UID: "477076a3-21e3-4a37-a442-54cd4d4ff12e"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 22:17:55.508437 master-0 kubenswrapper[29458]: I0308 22:17:55.508353 29458 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 08 22:17:55.508900 master-0 kubenswrapper[29458]: I0308 22:17:55.508873 29458 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/477076a3-21e3-4a37-a442-54cd4d4ff12e-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 08 22:17:55.509115 master-0 kubenswrapper[29458]: I0308 22:17:55.509057 29458 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqlz5\" (UniqueName: \"kubernetes.io/projected/477076a3-21e3-4a37-a442-54cd4d4ff12e-kube-api-access-fqlz5\") on node \"master-0\" DevicePath \"\"" Mar 08 22:17:55.509261 master-0 kubenswrapper[29458]: I0308 22:17:55.509239 29458 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/477076a3-21e3-4a37-a442-54cd4d4ff12e-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 08 22:17:55.509390 master-0 kubenswrapper[29458]: I0308 22:17:55.509367 29458 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/477076a3-21e3-4a37-a442-54cd4d4ff12e-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 08 22:17:55.992825 master-0 kubenswrapper[29458]: I0308 22:17:55.992744 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-798c96757f-zln5h_477076a3-21e3-4a37-a442-54cd4d4ff12e/console/1.log" Mar 08 22:17:55.993746 master-0 kubenswrapper[29458]: I0308 22:17:55.992856 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-798c96757f-zln5h" event={"ID":"477076a3-21e3-4a37-a442-54cd4d4ff12e","Type":"ContainerDied","Data":"15bedef1821aaf15c54b7aa14c5678510db82a1a5b352b252d708d77f4fc038e"} Mar 08 22:17:55.993746 master-0 kubenswrapper[29458]: I0308 22:17:55.992912 29458 scope.go:117] "RemoveContainer" containerID="790255ffe12cb0e20e058053bac6dd672a26823a7a323e3ac6c9ba2431bd075b" Mar 08 22:17:55.993746 master-0 kubenswrapper[29458]: I0308 22:17:55.993168 29458 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-798c96757f-zln5h" Mar 08 22:17:56.051033 master-0 kubenswrapper[29458]: I0308 22:17:56.050932 29458 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-798c96757f-zln5h"] Mar 08 22:17:56.061545 master-0 kubenswrapper[29458]: I0308 22:17:56.061442 29458 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-798c96757f-zln5h"] Mar 08 22:17:56.988159 master-0 kubenswrapper[29458]: I0308 22:17:56.988032 29458 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="477076a3-21e3-4a37-a442-54cd4d4ff12e" path="/var/lib/kubelet/pods/477076a3-21e3-4a37-a442-54cd4d4ff12e/volumes" Mar 08 22:17:58.303668 master-0 kubenswrapper[29458]: I0308 22:17:58.303574 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-cbcdbfdc5-b5crv" Mar 08 22:17:58.311688 master-0 kubenswrapper[29458]: I0308 22:17:58.309492 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-cbcdbfdc5-b5crv" Mar 08 22:18:01.543818 master-0 kubenswrapper[29458]: I0308 22:18:01.543664 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6994646879-wvkdk" Mar 08 22:18:01.543818 master-0 kubenswrapper[29458]: I0308 22:18:01.543804 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6994646879-wvkdk" Mar 08 22:18:01.552526 master-0 kubenswrapper[29458]: I0308 22:18:01.552479 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6994646879-wvkdk" Mar 08 22:18:02.067726 master-0 kubenswrapper[29458]: I0308 22:18:02.067647 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6994646879-wvkdk" Mar 08 22:18:02.154214 master-0 kubenswrapper[29458]: I0308 22:18:02.154022 29458 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-cbcdbfdc5-b5crv"] Mar 08 22:18:11.693506 master-0 kubenswrapper[29458]: I0308 22:18:11.693435 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 08 22:18:11.694268 master-0 kubenswrapper[29458]: E0308 22:18:11.693831 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="477076a3-21e3-4a37-a442-54cd4d4ff12e" containerName="console" Mar 08 22:18:11.694268 master-0 kubenswrapper[29458]: I0308 22:18:11.693854 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="477076a3-21e3-4a37-a442-54cd4d4ff12e" containerName="console" Mar 08 22:18:11.694268 master-0 kubenswrapper[29458]: I0308 22:18:11.694097 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="477076a3-21e3-4a37-a442-54cd4d4ff12e" containerName="console" Mar 08 22:18:11.694268 master-0 kubenswrapper[29458]: I0308 22:18:11.694124 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="477076a3-21e3-4a37-a442-54cd4d4ff12e" containerName="console" Mar 08 22:18:11.694268 master-0 kubenswrapper[29458]: E0308 22:18:11.694256 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="477076a3-21e3-4a37-a442-54cd4d4ff12e" containerName="console" Mar 08 22:18:11.694268 master-0 kubenswrapper[29458]: I0308 22:18:11.694267 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="477076a3-21e3-4a37-a442-54cd4d4ff12e" containerName="console" Mar 08 22:18:11.698737 master-0 kubenswrapper[29458]: I0308 
Mar 08 22:18:11.698737 master-0 kubenswrapper[29458]: I0308 22:18:11.698690 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.702139 master-0 kubenswrapper[29458]: I0308 22:18:11.702027 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Mar 08 22:18:11.702262 master-0 kubenswrapper[29458]: I0308 22:18:11.702213 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Mar 08 22:18:11.702307 master-0 kubenswrapper[29458]: I0308 22:18:11.702255 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Mar 08 22:18:11.702632 master-0 kubenswrapper[29458]: I0308 22:18:11.702589 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Mar 08 22:18:11.704961 master-0 kubenswrapper[29458]: I0308 22:18:11.704928 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Mar 08 22:18:11.705411 master-0 kubenswrapper[29458]: I0308 22:18:11.705385 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Mar 08 22:18:11.711202 master-0 kubenswrapper[29458]: I0308 22:18:11.711163 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Mar 08 22:18:11.719148 master-0 kubenswrapper[29458]: I0308 22:18:11.717434 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Mar 08 22:18:11.735199 master-0 kubenswrapper[29458]: I0308 22:18:11.733656 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 08 22:18:11.829839 master-0 kubenswrapper[29458]: I0308 22:18:11.829644 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ea001c11-5075-4318-8897-413d37e872ec-web-config\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.829839 master-0 kubenswrapper[29458]: I0308 22:18:11.829722 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea001c11-5075-4318-8897-413d37e872ec-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.829839 master-0 kubenswrapper[29458]: I0308 22:18:11.829748 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ea001c11-5075-4318-8897-413d37e872ec-config-out\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.829839 master-0 kubenswrapper[29458]: I0308 22:18:11.829764 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn6w8\" (UniqueName: \"kubernetes.io/projected/ea001c11-5075-4318-8897-413d37e872ec-kube-api-access-fn6w8\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.829839 master-0 kubenswrapper[29458]: I0308 22:18:11.829788 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ea001c11-5075-4318-8897-413d37e872ec-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.829839 master-0 kubenswrapper[29458]: I0308 22:18:11.829823 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ea001c11-5075-4318-8897-413d37e872ec-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.829839 master-0 kubenswrapper[29458]: I0308 22:18:11.829850 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/ea001c11-5075-4318-8897-413d37e872ec-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.829839 master-0 kubenswrapper[29458]: I0308 22:18:11.829870 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ea001c11-5075-4318-8897-413d37e872ec-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.830494 master-0 kubenswrapper[29458]: I0308 22:18:11.829893 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/ea001c11-5075-4318-8897-413d37e872ec-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.830494 master-0 kubenswrapper[29458]: I0308 22:18:11.829911 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ea001c11-5075-4318-8897-413d37e872ec-config-volume\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.830494 master-0 kubenswrapper[29458]: I0308 22:18:11.829967 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/ea001c11-5075-4318-8897-413d37e872ec-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.830494 master-0 kubenswrapper[29458]: I0308 22:18:11.830312 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ea001c11-5075-4318-8897-413d37e872ec-tls-assets\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.933848 master-0 kubenswrapper[29458]: I0308 22:18:11.933751 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ea001c11-5075-4318-8897-413d37e872ec-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.934224 master-0 kubenswrapper[29458]: I0308 22:18:11.933933 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/ea001c11-5075-4318-8897-413d37e872ec-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.934224 master-0 kubenswrapper[29458]: I0308 22:18:11.933972 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ea001c11-5075-4318-8897-413d37e872ec-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.934224 master-0 kubenswrapper[29458]: I0308 22:18:11.934007 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/ea001c11-5075-4318-8897-413d37e872ec-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.934421 master-0 kubenswrapper[29458]: I0308 22:18:11.934364 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ea001c11-5075-4318-8897-413d37e872ec-config-volume\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.934477 master-0 kubenswrapper[29458]: I0308 22:18:11.934443 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/ea001c11-5075-4318-8897-413d37e872ec-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.934804 master-0 kubenswrapper[29458]: I0308 22:18:11.934760 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ea001c11-5075-4318-8897-413d37e872ec-tls-assets\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.934907 master-0 kubenswrapper[29458]: I0308 22:18:11.934876 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ea001c11-5075-4318-8897-413d37e872ec-web-config\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.934990 master-0 kubenswrapper[29458]: I0308 22:18:11.934963 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea001c11-5075-4318-8897-413d37e872ec-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.935151 master-0 kubenswrapper[29458]: I0308 22:18:11.935115 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/ea001c11-5075-4318-8897-413d37e872ec-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.935226 master-0 kubenswrapper[29458]: I0308 22:18:11.935175 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ea001c11-5075-4318-8897-413d37e872ec-config-out\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.935283 master-0 kubenswrapper[29458]: I0308 22:18:11.935241 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn6w8\" (UniqueName: \"kubernetes.io/projected/ea001c11-5075-4318-8897-413d37e872ec-kube-api-access-fn6w8\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.935332 master-0 kubenswrapper[29458]: I0308 22:18:11.935311 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ea001c11-5075-4318-8897-413d37e872ec-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.936759 master-0 kubenswrapper[29458]: I0308 22:18:11.936684 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea001c11-5075-4318-8897-413d37e872ec-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.936855 master-0 kubenswrapper[29458]: I0308 22:18:11.936783 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ea001c11-5075-4318-8897-413d37e872ec-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.939457 master-0 kubenswrapper[29458]: I0308 22:18:11.938938 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ea001c11-5075-4318-8897-413d37e872ec-config-out\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.939457 master-0 kubenswrapper[29458]: I0308 22:18:11.939374 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/ea001c11-5075-4318-8897-413d37e872ec-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.941448 master-0 kubenswrapper[29458]: I0308 22:18:11.940040 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ea001c11-5075-4318-8897-413d37e872ec-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.941448 master-0 kubenswrapper[29458]: I0308 22:18:11.940206 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ea001c11-5075-4318-8897-413d37e872ec-tls-assets\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.949148 master-0 kubenswrapper[29458]: I0308 22:18:11.948594 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ea001c11-5075-4318-8897-413d37e872ec-web-config\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.949148 master-0 kubenswrapper[29458]: I0308 22:18:11.948966 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ea001c11-5075-4318-8897-413d37e872ec-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.949556 master-0 kubenswrapper[29458]: I0308 22:18:11.949419 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/ea001c11-5075-4318-8897-413d37e872ec-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.949638 master-0 kubenswrapper[29458]: I0308 22:18:11.949581 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ea001c11-5075-4318-8897-413d37e872ec-config-volume\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:11.956765 master-0 kubenswrapper[29458]: I0308 22:18:11.956266 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn6w8\" (UniqueName: \"kubernetes.io/projected/ea001c11-5075-4318-8897-413d37e872ec-kube-api-access-fn6w8\") pod \"alertmanager-main-0\" (UID: \"ea001c11-5075-4318-8897-413d37e872ec\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:12.016491 master-0 kubenswrapper[29458]: I0308 22:18:12.016386 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Mar 08 22:18:12.554595 master-0 kubenswrapper[29458]: I0308 22:18:12.554516 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 08 22:18:12.692390 master-0 kubenswrapper[29458]: I0308 22:18:12.692289 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-7768d84bb4-7rwb2"]
Need to start a new one" pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" Mar 08 22:18:12.699764 master-0 kubenswrapper[29458]: I0308 22:18:12.698162 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Mar 08 22:18:12.699764 master-0 kubenswrapper[29458]: I0308 22:18:12.698735 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Mar 08 22:18:12.699764 master-0 kubenswrapper[29458]: I0308 22:18:12.699031 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-9i0q8ispj481v" Mar 08 22:18:12.699764 master-0 kubenswrapper[29458]: I0308 22:18:12.699293 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Mar 08 22:18:12.699764 master-0 kubenswrapper[29458]: I0308 22:18:12.699581 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Mar 08 22:18:12.699764 master-0 kubenswrapper[29458]: I0308 22:18:12.699626 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Mar 08 22:18:12.719385 master-0 kubenswrapper[29458]: I0308 22:18:12.719322 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-7768d84bb4-7rwb2"] Mar 08 22:18:12.862529 master-0 kubenswrapper[29458]: I0308 22:18:12.862467 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfbqb\" (UniqueName: \"kubernetes.io/projected/1ed6ab94-28e5-4f78-b579-01ded8462737-kube-api-access-xfbqb\") pod \"thanos-querier-7768d84bb4-7rwb2\" (UID: \"1ed6ab94-28e5-4f78-b579-01ded8462737\") " pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" Mar 08 22:18:12.862828 master-0 kubenswrapper[29458]: I0308 22:18:12.862550 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/1ed6ab94-28e5-4f78-b579-01ded8462737-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7768d84bb4-7rwb2\" (UID: \"1ed6ab94-28e5-4f78-b579-01ded8462737\") " pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" Mar 08 22:18:12.862828 master-0 kubenswrapper[29458]: I0308 22:18:12.862591 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1ed6ab94-28e5-4f78-b579-01ded8462737-secret-grpc-tls\") pod \"thanos-querier-7768d84bb4-7rwb2\" (UID: \"1ed6ab94-28e5-4f78-b579-01ded8462737\") " pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" Mar 08 22:18:12.862828 master-0 kubenswrapper[29458]: I0308 22:18:12.862657 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1ed6ab94-28e5-4f78-b579-01ded8462737-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7768d84bb4-7rwb2\" (UID: \"1ed6ab94-28e5-4f78-b579-01ded8462737\") " pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" Mar 08 22:18:12.862828 master-0 kubenswrapper[29458]: I0308 22:18:12.862684 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1ed6ab94-28e5-4f78-b579-01ded8462737-metrics-client-ca\") pod \"thanos-querier-7768d84bb4-7rwb2\" (UID: \"1ed6ab94-28e5-4f78-b579-01ded8462737\") " pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" Mar 08 22:18:12.862828 master-0 kubenswrapper[29458]: I0308 22:18:12.862711 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/1ed6ab94-28e5-4f78-b579-01ded8462737-secret-thanos-querier-tls\") pod \"thanos-querier-7768d84bb4-7rwb2\" (UID: \"1ed6ab94-28e5-4f78-b579-01ded8462737\") " pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" Mar 08 22:18:12.862828 master-0 kubenswrapper[29458]: I0308 22:18:12.862736 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1ed6ab94-28e5-4f78-b579-01ded8462737-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7768d84bb4-7rwb2\" (UID: \"1ed6ab94-28e5-4f78-b579-01ded8462737\") " pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" Mar 08 22:18:12.862828 master-0 kubenswrapper[29458]: I0308 22:18:12.862810 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/1ed6ab94-28e5-4f78-b579-01ded8462737-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7768d84bb4-7rwb2\" (UID: \"1ed6ab94-28e5-4f78-b579-01ded8462737\") " pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" Mar 08 22:18:12.964647 master-0 kubenswrapper[29458]: I0308 22:18:12.964570 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1ed6ab94-28e5-4f78-b579-01ded8462737-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7768d84bb4-7rwb2\" (UID: \"1ed6ab94-28e5-4f78-b579-01ded8462737\") " pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" Mar 08 22:18:12.964647 master-0 kubenswrapper[29458]: I0308 22:18:12.964627 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1ed6ab94-28e5-4f78-b579-01ded8462737-metrics-client-ca\") pod \"thanos-querier-7768d84bb4-7rwb2\" (UID: \"1ed6ab94-28e5-4f78-b579-01ded8462737\") " pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" Mar 08 22:18:12.964647 master-0 kubenswrapper[29458]: I0308 22:18:12.964652 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/1ed6ab94-28e5-4f78-b579-01ded8462737-secret-thanos-querier-tls\") pod \"thanos-querier-7768d84bb4-7rwb2\" (UID: \"1ed6ab94-28e5-4f78-b579-01ded8462737\") " pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" Mar 08 22:18:12.964998 master-0 kubenswrapper[29458]: I0308 22:18:12.964671 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1ed6ab94-28e5-4f78-b579-01ded8462737-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7768d84bb4-7rwb2\" (UID: \"1ed6ab94-28e5-4f78-b579-01ded8462737\") " pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" Mar 08 22:18:12.964998 master-0 
kubenswrapper[29458]: I0308 22:18:12.964936 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/1ed6ab94-28e5-4f78-b579-01ded8462737-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7768d84bb4-7rwb2\" (UID: \"1ed6ab94-28e5-4f78-b579-01ded8462737\") " pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" Mar 08 22:18:12.965141 master-0 kubenswrapper[29458]: I0308 22:18:12.965106 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfbqb\" (UniqueName: \"kubernetes.io/projected/1ed6ab94-28e5-4f78-b579-01ded8462737-kube-api-access-xfbqb\") pod \"thanos-querier-7768d84bb4-7rwb2\" (UID: \"1ed6ab94-28e5-4f78-b579-01ded8462737\") " pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" Mar 08 22:18:12.965211 master-0 kubenswrapper[29458]: I0308 22:18:12.965176 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/1ed6ab94-28e5-4f78-b579-01ded8462737-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7768d84bb4-7rwb2\" (UID: \"1ed6ab94-28e5-4f78-b579-01ded8462737\") " pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" Mar 08 22:18:12.965257 master-0 kubenswrapper[29458]: I0308 22:18:12.965233 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1ed6ab94-28e5-4f78-b579-01ded8462737-secret-grpc-tls\") pod \"thanos-querier-7768d84bb4-7rwb2\" (UID: \"1ed6ab94-28e5-4f78-b579-01ded8462737\") " pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" Mar 08 22:18:12.966258 master-0 kubenswrapper[29458]: I0308 22:18:12.966217 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1ed6ab94-28e5-4f78-b579-01ded8462737-metrics-client-ca\") pod \"thanos-querier-7768d84bb4-7rwb2\" (UID: \"1ed6ab94-28e5-4f78-b579-01ded8462737\") " pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" Mar 08 22:18:12.968670 master-0 kubenswrapper[29458]: I0308 22:18:12.968625 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/1ed6ab94-28e5-4f78-b579-01ded8462737-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7768d84bb4-7rwb2\" (UID: \"1ed6ab94-28e5-4f78-b579-01ded8462737\") " pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" Mar 08 22:18:12.969029 master-0 kubenswrapper[29458]: I0308 22:18:12.969001 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/1ed6ab94-28e5-4f78-b579-01ded8462737-secret-thanos-querier-tls\") pod \"thanos-querier-7768d84bb4-7rwb2\" (UID: \"1ed6ab94-28e5-4f78-b579-01ded8462737\") " pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" Mar 08 22:18:12.969646 master-0 kubenswrapper[29458]: I0308 22:18:12.969596 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/1ed6ab94-28e5-4f78-b579-01ded8462737-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7768d84bb4-7rwb2\" (UID: \"1ed6ab94-28e5-4f78-b579-01ded8462737\") " 
pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" Mar 08 22:18:12.971156 master-0 kubenswrapper[29458]: I0308 22:18:12.970881 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1ed6ab94-28e5-4f78-b579-01ded8462737-secret-grpc-tls\") pod \"thanos-querier-7768d84bb4-7rwb2\" (UID: \"1ed6ab94-28e5-4f78-b579-01ded8462737\") " pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" Mar 08 22:18:12.971156 master-0 kubenswrapper[29458]: I0308 22:18:12.971066 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1ed6ab94-28e5-4f78-b579-01ded8462737-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7768d84bb4-7rwb2\" (UID: \"1ed6ab94-28e5-4f78-b579-01ded8462737\") " pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" Mar 08 22:18:12.971486 master-0 kubenswrapper[29458]: I0308 22:18:12.971445 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1ed6ab94-28e5-4f78-b579-01ded8462737-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7768d84bb4-7rwb2\" (UID: \"1ed6ab94-28e5-4f78-b579-01ded8462737\") " pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" Mar 08 22:18:12.985985 master-0 kubenswrapper[29458]: I0308 22:18:12.985877 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfbqb\" (UniqueName: \"kubernetes.io/projected/1ed6ab94-28e5-4f78-b579-01ded8462737-kube-api-access-xfbqb\") pod \"thanos-querier-7768d84bb4-7rwb2\" (UID: \"1ed6ab94-28e5-4f78-b579-01ded8462737\") " pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" Mar 08 22:18:13.023140 master-0 kubenswrapper[29458]: I0308 22:18:13.022996 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" Mar 08 22:18:13.186557 master-0 kubenswrapper[29458]: I0308 22:18:13.186468 29458 generic.go:334] "Generic (PLEG): container finished" podID="ea001c11-5075-4318-8897-413d37e872ec" containerID="53120e6df4b54873ffca05b70d246b9e1ec9c405a318f810ad0bb82b082562ca" exitCode=0 Mar 08 22:18:13.186731 master-0 kubenswrapper[29458]: I0308 22:18:13.186524 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ea001c11-5075-4318-8897-413d37e872ec","Type":"ContainerDied","Data":"53120e6df4b54873ffca05b70d246b9e1ec9c405a318f810ad0bb82b082562ca"} Mar 08 22:18:13.186790 master-0 kubenswrapper[29458]: I0308 22:18:13.186756 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ea001c11-5075-4318-8897-413d37e872ec","Type":"ContainerStarted","Data":"bb2a9233a55c17cf4ab7a7e4de9405b2dea14e4abe74f1f2fac2e7cc5300d89e"} Mar 08 22:18:13.530116 master-0 kubenswrapper[29458]: I0308 22:18:13.527694 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-7768d84bb4-7rwb2"] Mar 08 22:18:13.530116 master-0 kubenswrapper[29458]: W0308 22:18:13.528189 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ed6ab94_28e5_4f78_b579_01ded8462737.slice/crio-6a4dce5b9dfadbdcdd229c7168266147dcb625b6ba58826363404e61fafafb1a WatchSource:0}: Error finding container 6a4dce5b9dfadbdcdd229c7168266147dcb625b6ba58826363404e61fafafb1a: Status 404 returned error can't find the container with id 6a4dce5b9dfadbdcdd229c7168266147dcb625b6ba58826363404e61fafafb1a Mar 08 22:18:14.197821 master-0 kubenswrapper[29458]: I0308 22:18:14.197762 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" event={"ID":"1ed6ab94-28e5-4f78-b579-01ded8462737","Type":"ContainerStarted","Data":"6a4dce5b9dfadbdcdd229c7168266147dcb625b6ba58826363404e61fafafb1a"} Mar 08 22:18:15.416731 master-0 kubenswrapper[29458]: I0308 22:18:15.416645 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-7784d6fc57-xrnjf"] Mar 08 22:18:15.418205 master-0 kubenswrapper[29458]: I0308 22:18:15.417998 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-7784d6fc57-xrnjf" Mar 08 22:18:15.424859 master-0 kubenswrapper[29458]: I0308 22:18:15.424782 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-ccusvhigb3u45" Mar 08 22:18:15.432621 master-0 kubenswrapper[29458]: I0308 22:18:15.432552 29458 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-f5876b8d7-2222x"] Mar 08 22:18:15.433043 master-0 kubenswrapper[29458]: I0308 22:18:15.432894 29458 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" podUID="d589bfbb-3a7d-4617-9770-5c9ef737cb4a" containerName="metrics-server" containerID="cri-o://43a9d4a149475717fa1ef3d37fbaab396886033829072b529898dcdefcf58e78" gracePeriod=170 Mar 08 22:18:15.446250 master-0 kubenswrapper[29458]: I0308 22:18:15.446158 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7784d6fc57-xrnjf"] Mar 08 22:18:15.514957 master-0 kubenswrapper[29458]: I0308 22:18:15.514014 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1db22489-423e-40df-a153-5f027a65738e-secret-metrics-client-certs\") pod \"metrics-server-7784d6fc57-xrnjf\" (UID: \"1db22489-423e-40df-a153-5f027a65738e\") " pod="openshift-monitoring/metrics-server-7784d6fc57-xrnjf" Mar 08 22:18:15.514957 master-0 kubenswrapper[29458]: I0308 22:18:15.514214 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/1db22489-423e-40df-a153-5f027a65738e-metrics-server-audit-profiles\") pod \"metrics-server-7784d6fc57-xrnjf\" (UID: \"1db22489-423e-40df-a153-5f027a65738e\") " pod="openshift-monitoring/metrics-server-7784d6fc57-xrnjf" Mar 08 22:18:15.514957 master-0 kubenswrapper[29458]: I0308 22:18:15.514258 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1db22489-423e-40df-a153-5f027a65738e-client-ca-bundle\") pod \"metrics-server-7784d6fc57-xrnjf\" (UID: \"1db22489-423e-40df-a153-5f027a65738e\") " pod="openshift-monitoring/metrics-server-7784d6fc57-xrnjf" Mar 08 22:18:15.514957 master-0 kubenswrapper[29458]: I0308 22:18:15.514304 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/1db22489-423e-40df-a153-5f027a65738e-secret-metrics-server-tls\") pod \"metrics-server-7784d6fc57-xrnjf\" (UID: \"1db22489-423e-40df-a153-5f027a65738e\") " pod="openshift-monitoring/metrics-server-7784d6fc57-xrnjf" Mar 08 22:18:15.514957 master-0 kubenswrapper[29458]: I0308 22:18:15.514343 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/1db22489-423e-40df-a153-5f027a65738e-audit-log\") pod \"metrics-server-7784d6fc57-xrnjf\" (UID: \"1db22489-423e-40df-a153-5f027a65738e\") " pod="openshift-monitoring/metrics-server-7784d6fc57-xrnjf" Mar 08 22:18:15.514957 master-0 kubenswrapper[29458]: I0308 22:18:15.514367 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1db22489-423e-40df-a153-5f027a65738e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7784d6fc57-xrnjf\" (UID: \"1db22489-423e-40df-a153-5f027a65738e\") " pod="openshift-monitoring/metrics-server-7784d6fc57-xrnjf" Mar 08 22:18:15.514957 master-0 kubenswrapper[29458]: I0308 22:18:15.514523 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5thg\" (UniqueName: \"kubernetes.io/projected/1db22489-423e-40df-a153-5f027a65738e-kube-api-access-x5thg\") pod \"metrics-server-7784d6fc57-xrnjf\" (UID: \"1db22489-423e-40df-a153-5f027a65738e\") " pod="openshift-monitoring/metrics-server-7784d6fc57-xrnjf" Mar 08 22:18:15.615918 master-0 kubenswrapper[29458]: I0308 22:18:15.615831 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1db22489-423e-40df-a153-5f027a65738e-secret-metrics-client-certs\") pod \"metrics-server-7784d6fc57-xrnjf\" (UID: \"1db22489-423e-40df-a153-5f027a65738e\") " pod="openshift-monitoring/metrics-server-7784d6fc57-xrnjf" Mar 08 22:18:15.615918 master-0 kubenswrapper[29458]: I0308 22:18:15.615921 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/1db22489-423e-40df-a153-5f027a65738e-metrics-server-audit-profiles\") pod \"metrics-server-7784d6fc57-xrnjf\" (UID: \"1db22489-423e-40df-a153-5f027a65738e\") " pod="openshift-monitoring/metrics-server-7784d6fc57-xrnjf" Mar 08 22:18:15.616293 master-0 kubenswrapper[29458]: I0308 22:18:15.615970 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1db22489-423e-40df-a153-5f027a65738e-client-ca-bundle\") pod \"metrics-server-7784d6fc57-xrnjf\" (UID: \"1db22489-423e-40df-a153-5f027a65738e\") " pod="openshift-monitoring/metrics-server-7784d6fc57-xrnjf" Mar 08 22:18:15.616293 master-0 kubenswrapper[29458]: I0308 22:18:15.616022 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/1db22489-423e-40df-a153-5f027a65738e-secret-metrics-server-tls\") pod \"metrics-server-7784d6fc57-xrnjf\" (UID: \"1db22489-423e-40df-a153-5f027a65738e\") " pod="openshift-monitoring/metrics-server-7784d6fc57-xrnjf" Mar 08 22:18:15.620640 master-0 kubenswrapper[29458]: I0308 22:18:15.620577 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/1db22489-423e-40df-a153-5f027a65738e-metrics-server-audit-profiles\") pod \"metrics-server-7784d6fc57-xrnjf\" (UID: \"1db22489-423e-40df-a153-5f027a65738e\") " pod="openshift-monitoring/metrics-server-7784d6fc57-xrnjf" Mar 08 22:18:15.620761 master-0 kubenswrapper[29458]: I0308 22:18:15.620689 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/1db22489-423e-40df-a153-5f027a65738e-audit-log\") pod \"metrics-server-7784d6fc57-xrnjf\" (UID: \"1db22489-423e-40df-a153-5f027a65738e\") " pod="openshift-monitoring/metrics-server-7784d6fc57-xrnjf" Mar 08 22:18:15.620818 master-0 kubenswrapper[29458]: I0308 22:18:15.620785 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1db22489-423e-40df-a153-5f027a65738e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7784d6fc57-xrnjf\" (UID: \"1db22489-423e-40df-a153-5f027a65738e\") " pod="openshift-monitoring/metrics-server-7784d6fc57-xrnjf" Mar 08 22:18:15.620873 master-0 kubenswrapper[29458]: I0308 22:18:15.620825 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5thg\" (UniqueName: \"kubernetes.io/projected/1db22489-423e-40df-a153-5f027a65738e-kube-api-access-x5thg\") pod \"metrics-server-7784d6fc57-xrnjf\" (UID: \"1db22489-423e-40df-a153-5f027a65738e\") " pod="openshift-monitoring/metrics-server-7784d6fc57-xrnjf" Mar 08 22:18:15.621205 master-0 kubenswrapper[29458]: I0308 22:18:15.621173 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/1db22489-423e-40df-a153-5f027a65738e-audit-log\") pod \"metrics-server-7784d6fc57-xrnjf\" (UID: \"1db22489-423e-40df-a153-5f027a65738e\") " pod="openshift-monitoring/metrics-server-7784d6fc57-xrnjf" Mar 08 22:18:15.622042 master-0 kubenswrapper[29458]: I0308 22:18:15.621922 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1db22489-423e-40df-a153-5f027a65738e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7784d6fc57-xrnjf\" (UID: \"1db22489-423e-40df-a153-5f027a65738e\") " pod="openshift-monitoring/metrics-server-7784d6fc57-xrnjf" Mar 08 22:18:15.624416 master-0 kubenswrapper[29458]: I0308 22:18:15.624372 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1db22489-423e-40df-a153-5f027a65738e-client-ca-bundle\") pod \"metrics-server-7784d6fc57-xrnjf\" (UID: \"1db22489-423e-40df-a153-5f027a65738e\") " pod="openshift-monitoring/metrics-server-7784d6fc57-xrnjf" Mar 08 22:18:15.624894 master-0 kubenswrapper[29458]: I0308 22:18:15.624868 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/1db22489-423e-40df-a153-5f027a65738e-secret-metrics-server-tls\") pod \"metrics-server-7784d6fc57-xrnjf\" (UID: \"1db22489-423e-40df-a153-5f027a65738e\") " pod="openshift-monitoring/metrics-server-7784d6fc57-xrnjf" Mar 08 22:18:15.627309 master-0 kubenswrapper[29458]: I0308 22:18:15.627271 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1db22489-423e-40df-a153-5f027a65738e-secret-metrics-client-certs\") pod \"metrics-server-7784d6fc57-xrnjf\" (UID: \"1db22489-423e-40df-a153-5f027a65738e\") " pod="openshift-monitoring/metrics-server-7784d6fc57-xrnjf" Mar 08 22:18:15.641456 master-0 kubenswrapper[29458]: I0308 22:18:15.641396 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5thg\" (UniqueName: \"kubernetes.io/projected/1db22489-423e-40df-a153-5f027a65738e-kube-api-access-x5thg\") pod \"metrics-server-7784d6fc57-xrnjf\" (UID: \"1db22489-423e-40df-a153-5f027a65738e\") " pod="openshift-monitoring/metrics-server-7784d6fc57-xrnjf" Mar 08 22:18:15.759683 master-0 kubenswrapper[29458]: I0308 22:18:15.758322 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-7784d6fc57-xrnjf" Mar 08 22:18:16.189252 master-0 kubenswrapper[29458]: I0308 22:18:16.189169 29458 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-695dfc9f84-n5pqv" podUID="85d1ad38-c1b6-4fc4-a852-703ba6171ca3" containerName="console" containerID="cri-o://4bd4aa9baa387b392b4df50b79bd9b96db8ebc9695e43440320eac9e89b07164" gracePeriod=15 Mar 08 22:18:16.525716 master-0 kubenswrapper[29458]: I0308 22:18:16.525516 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7784d6fc57-xrnjf"] Mar 08 22:18:16.538226 master-0 kubenswrapper[29458]: W0308 22:18:16.538174 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1db22489_423e_40df_a153_5f027a65738e.slice/crio-837499dbbda7c495488fecf96e9c543eeb4bb513a2458c95881c058a299f5397 WatchSource:0}: Error finding container 837499dbbda7c495488fecf96e9c543eeb4bb513a2458c95881c058a299f5397: Status 404 returned error can't find the container with id 837499dbbda7c495488fecf96e9c543eeb4bb513a2458c95881c058a299f5397 Mar 08 22:18:16.667975 master-0 kubenswrapper[29458]: I0308 22:18:16.667935 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-695dfc9f84-n5pqv_85d1ad38-c1b6-4fc4-a852-703ba6171ca3/console/0.log" Mar 08 22:18:16.668110 master-0 kubenswrapper[29458]: I0308 22:18:16.668011 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-695dfc9f84-n5pqv" Mar 08 22:18:16.740339 master-0 kubenswrapper[29458]: I0308 22:18:16.740285 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjd9f\" (UniqueName: \"kubernetes.io/projected/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-kube-api-access-qjd9f\") pod \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\" (UID: \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\") " Mar 08 22:18:16.740435 master-0 kubenswrapper[29458]: I0308 22:18:16.740344 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-oauth-serving-cert\") pod \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\" (UID: \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\") " Mar 08 22:18:16.740435 master-0 kubenswrapper[29458]: I0308 22:18:16.740369 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-console-oauth-config\") pod \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\" (UID: \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\") " Mar 08 22:18:16.740435 master-0 kubenswrapper[29458]: I0308 22:18:16.740413 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-service-ca\") pod \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\" (UID: \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\") " Mar 08 22:18:16.741763 master-0 kubenswrapper[29458]: I0308 22:18:16.740915 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-console-serving-cert\") pod \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\" (UID: \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\") " Mar 08 22:18:16.741763 master-0 
kubenswrapper[29458]: I0308 22:18:16.740971 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-service-ca" (OuterVolumeSpecName: "service-ca") pod "85d1ad38-c1b6-4fc4-a852-703ba6171ca3" (UID: "85d1ad38-c1b6-4fc4-a852-703ba6171ca3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:18:16.741763 master-0 kubenswrapper[29458]: I0308 22:18:16.741119 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-trusted-ca-bundle\") pod \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\" (UID: \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\") " Mar 08 22:18:16.741763 master-0 kubenswrapper[29458]: I0308 22:18:16.741157 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-console-config\") pod \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\" (UID: \"85d1ad38-c1b6-4fc4-a852-703ba6171ca3\") " Mar 08 22:18:16.741763 master-0 kubenswrapper[29458]: I0308 22:18:16.741328 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "85d1ad38-c1b6-4fc4-a852-703ba6171ca3" (UID: "85d1ad38-c1b6-4fc4-a852-703ba6171ca3"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:18:16.741763 master-0 kubenswrapper[29458]: I0308 22:18:16.741698 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "85d1ad38-c1b6-4fc4-a852-703ba6171ca3" (UID: "85d1ad38-c1b6-4fc4-a852-703ba6171ca3"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:18:16.741763 master-0 kubenswrapper[29458]: I0308 22:18:16.741727 29458 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 08 22:18:16.741763 master-0 kubenswrapper[29458]: I0308 22:18:16.741741 29458 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 08 22:18:16.742035 master-0 kubenswrapper[29458]: I0308 22:18:16.741928 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-console-config" (OuterVolumeSpecName: "console-config") pod "85d1ad38-c1b6-4fc4-a852-703ba6171ca3" (UID: "85d1ad38-c1b6-4fc4-a852-703ba6171ca3"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:18:16.744681 master-0 kubenswrapper[29458]: I0308 22:18:16.744623 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "85d1ad38-c1b6-4fc4-a852-703ba6171ca3" (UID: "85d1ad38-c1b6-4fc4-a852-703ba6171ca3"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 22:18:16.747586 master-0 kubenswrapper[29458]: I0308 22:18:16.747187 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "85d1ad38-c1b6-4fc4-a852-703ba6171ca3" (UID: "85d1ad38-c1b6-4fc4-a852-703ba6171ca3"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 22:18:16.747586 master-0 kubenswrapper[29458]: I0308 22:18:16.747229 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-kube-api-access-qjd9f" (OuterVolumeSpecName: "kube-api-access-qjd9f") pod "85d1ad38-c1b6-4fc4-a852-703ba6171ca3" (UID: "85d1ad38-c1b6-4fc4-a852-703ba6171ca3"). InnerVolumeSpecName "kube-api-access-qjd9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 22:18:16.842833 master-0 kubenswrapper[29458]: I0308 22:18:16.842746 29458 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjd9f\" (UniqueName: \"kubernetes.io/projected/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-kube-api-access-qjd9f\") on node \"master-0\" DevicePath \"\"" Mar 08 22:18:16.842833 master-0 kubenswrapper[29458]: I0308 22:18:16.842781 29458 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 08 22:18:16.842833 master-0 kubenswrapper[29458]: I0308 22:18:16.842797 29458 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 08 22:18:16.842833 master-0 kubenswrapper[29458]: I0308 22:18:16.842806 29458 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 08 22:18:16.842833 master-0 kubenswrapper[29458]: I0308 22:18:16.842816 29458 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/85d1ad38-c1b6-4fc4-a852-703ba6171ca3-console-config\") on node \"master-0\" DevicePath \"\"" Mar 08 22:18:17.118150 master-0 kubenswrapper[29458]: I0308 22:18:17.117756 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 08 22:18:17.118442 master-0 kubenswrapper[29458]: E0308 22:18:17.118201 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85d1ad38-c1b6-4fc4-a852-703ba6171ca3" containerName="console" Mar 08 22:18:17.118442 master-0 kubenswrapper[29458]: I0308 22:18:17.118219 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="85d1ad38-c1b6-4fc4-a852-703ba6171ca3" containerName="console" Mar 08 22:18:17.118545 master-0 kubenswrapper[29458]: I0308 22:18:17.118487 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="85d1ad38-c1b6-4fc4-a852-703ba6171ca3" containerName="console" Mar 08 22:18:17.121090 master-0 kubenswrapper[29458]: I0308 22:18:17.121033 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.127796 master-0 kubenswrapper[29458]: I0308 22:18:17.127026 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Mar 08 22:18:17.127796 master-0 kubenswrapper[29458]: I0308 22:18:17.127304 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Mar 08 22:18:17.127796 master-0 kubenswrapper[29458]: I0308 22:18:17.127377 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Mar 08 22:18:17.128359 master-0 kubenswrapper[29458]: I0308 22:18:17.127810 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Mar 08 22:18:17.128359 master-0 kubenswrapper[29458]: I0308 22:18:17.128039 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Mar 08 22:18:17.128359 master-0 kubenswrapper[29458]: I0308 22:18:17.128090 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Mar 08 22:18:17.128359 master-0 kubenswrapper[29458]: I0308 22:18:17.128207 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Mar 08 22:18:17.128359 master-0 kubenswrapper[29458]: I0308 22:18:17.128305 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-1jemgj19238gd" Mar 08 22:18:17.130447 master-0 kubenswrapper[29458]: I0308 22:18:17.129826 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Mar 08 22:18:17.130731 master-0 kubenswrapper[29458]: I0308 22:18:17.130611 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Mar 08 22:18:17.147342 master-0 kubenswrapper[29458]: I0308 22:18:17.146447 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 08 22:18:17.147793 master-0 kubenswrapper[29458]: I0308 22:18:17.147281 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 08 22:18:17.154013 master-0 kubenswrapper[29458]: I0308 22:18:17.153967 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 08 22:18:17.247604 master-0 kubenswrapper[29458]: I0308 22:18:17.247545 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" event={"ID":"1ed6ab94-28e5-4f78-b579-01ded8462737","Type":"ContainerStarted","Data":"564f1792e10d3f0235946b890aa6e0d2973615dff33af00f5d18d609f83503e2"} Mar 08 22:18:17.247604 master-0 kubenswrapper[29458]: I0308 22:18:17.247606 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" event={"ID":"1ed6ab94-28e5-4f78-b579-01ded8462737","Type":"ContainerStarted","Data":"e6ff21d345fda7eee51ec57d7e726bb97a3a137b78350a69f29bbc5c2a679c9e"} Mar 08 22:18:17.247604 master-0 kubenswrapper[29458]: I0308 22:18:17.247620 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" 
event={"ID":"1ed6ab94-28e5-4f78-b579-01ded8462737","Type":"ContainerStarted","Data":"4c62f0d05d0de845e2b7ae0818162372e6e9b0a4008599886cf88e13de56b254"} Mar 08 22:18:17.248478 master-0 kubenswrapper[29458]: I0308 22:18:17.248454 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/420ccfbd-ab7a-401c-93ae-6658805c8e78-web-config\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.248600 master-0 kubenswrapper[29458]: I0308 22:18:17.248579 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/420ccfbd-ab7a-401c-93ae-6658805c8e78-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.248716 master-0 kubenswrapper[29458]: I0308 22:18:17.248696 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/420ccfbd-ab7a-401c-93ae-6658805c8e78-config\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.248832 master-0 kubenswrapper[29458]: I0308 22:18:17.248725 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-695dfc9f84-n5pqv_85d1ad38-c1b6-4fc4-a852-703ba6171ca3/console/0.log" Mar 08 22:18:17.248899 master-0 kubenswrapper[29458]: I0308 22:18:17.248847 29458 generic.go:334] "Generic (PLEG): container finished" podID="85d1ad38-c1b6-4fc4-a852-703ba6171ca3" containerID="4bd4aa9baa387b392b4df50b79bd9b96db8ebc9695e43440320eac9e89b07164" exitCode=2 Mar 08 22:18:17.248997 master-0 kubenswrapper[29458]: I0308 22:18:17.248950 29458 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-695dfc9f84-n5pqv" Mar 08 22:18:17.249063 master-0 kubenswrapper[29458]: I0308 22:18:17.248817 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/420ccfbd-ab7a-401c-93ae-6658805c8e78-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.249256 master-0 kubenswrapper[29458]: I0308 22:18:17.249237 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/420ccfbd-ab7a-401c-93ae-6658805c8e78-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.249356 master-0 kubenswrapper[29458]: I0308 22:18:17.249339 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46rzz\" (UniqueName: \"kubernetes.io/projected/420ccfbd-ab7a-401c-93ae-6658805c8e78-kube-api-access-46rzz\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.249467 master-0 kubenswrapper[29458]: I0308 22:18:17.249449 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/420ccfbd-ab7a-401c-93ae-6658805c8e78-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.249592 master-0 kubenswrapper[29458]: I0308 22:18:17.249568 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-695dfc9f84-n5pqv" event={"ID":"85d1ad38-c1b6-4fc4-a852-703ba6171ca3","Type":"ContainerDied","Data":"4bd4aa9baa387b392b4df50b79bd9b96db8ebc9695e43440320eac9e89b07164"} Mar 08 22:18:17.249654 master-0 kubenswrapper[29458]: I0308 22:18:17.249597 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-695dfc9f84-n5pqv" event={"ID":"85d1ad38-c1b6-4fc4-a852-703ba6171ca3","Type":"ContainerDied","Data":"d351772f0daf236d6c0bef90f3b6ff8dcc2b25792df1488b1f028ed4d53e79b7"} Mar 08 22:18:17.249654 master-0 kubenswrapper[29458]: I0308 22:18:17.249619 29458 scope.go:117] "RemoveContainer" containerID="4bd4aa9baa387b392b4df50b79bd9b96db8ebc9695e43440320eac9e89b07164" Mar 08 22:18:17.249752 master-0 kubenswrapper[29458]: I0308 22:18:17.249575 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/420ccfbd-ab7a-401c-93ae-6658805c8e78-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.249845 master-0 kubenswrapper[29458]: I0308 22:18:17.249828 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/420ccfbd-ab7a-401c-93ae-6658805c8e78-config-out\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.249955 master-0 kubenswrapper[29458]: I0308 
22:18:17.249935 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/420ccfbd-ab7a-401c-93ae-6658805c8e78-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.250105 master-0 kubenswrapper[29458]: I0308 22:18:17.250087 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/420ccfbd-ab7a-401c-93ae-6658805c8e78-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.250228 master-0 kubenswrapper[29458]: I0308 22:18:17.250200 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/420ccfbd-ab7a-401c-93ae-6658805c8e78-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.250325 master-0 kubenswrapper[29458]: I0308 22:18:17.250249 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/420ccfbd-ab7a-401c-93ae-6658805c8e78-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.250325 master-0 kubenswrapper[29458]: I0308 22:18:17.250280 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/420ccfbd-ab7a-401c-93ae-6658805c8e78-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.250325 master-0 kubenswrapper[29458]: I0308 22:18:17.250306 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/420ccfbd-ab7a-401c-93ae-6658805c8e78-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.250566 master-0 kubenswrapper[29458]: I0308 22:18:17.250331 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/420ccfbd-ab7a-401c-93ae-6658805c8e78-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.250566 master-0 kubenswrapper[29458]: I0308 22:18:17.250356 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/420ccfbd-ab7a-401c-93ae-6658805c8e78-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.250566 master-0 kubenswrapper[29458]: I0308 22:18:17.250395 29458 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/420ccfbd-ab7a-401c-93ae-6658805c8e78-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.251855 master-0 kubenswrapper[29458]: I0308 22:18:17.251757 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7784d6fc57-xrnjf" event={"ID":"1db22489-423e-40df-a153-5f027a65738e","Type":"ContainerStarted","Data":"3c74d4ca7b815ef98966abb513999f65d56cd92fb93db3810c14b96208c51595"} Mar 08 22:18:17.251855 master-0 kubenswrapper[29458]: I0308 22:18:17.251839 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7784d6fc57-xrnjf" event={"ID":"1db22489-423e-40df-a153-5f027a65738e","Type":"ContainerStarted","Data":"837499dbbda7c495488fecf96e9c543eeb4bb513a2458c95881c058a299f5397"} Mar 08 22:18:17.256666 master-0 kubenswrapper[29458]: I0308 22:18:17.256609 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ea001c11-5075-4318-8897-413d37e872ec","Type":"ContainerStarted","Data":"abd75f9aa7be900af4b60c528db4f88db31fd1c42f8d09ffa2b780728ae48772"} Mar 08 22:18:17.256867 master-0 kubenswrapper[29458]: I0308 22:18:17.256840 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ea001c11-5075-4318-8897-413d37e872ec","Type":"ContainerStarted","Data":"53beabf0d02e68a2ab7f1ff22356ab0e5e27d10f723b2caa170fd0bb66170a17"} Mar 08 22:18:17.257041 master-0 kubenswrapper[29458]: I0308 22:18:17.257017 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ea001c11-5075-4318-8897-413d37e872ec","Type":"ContainerStarted","Data":"ff6705d1e7414fe471307036e4902b06d54a208728b75af7a71e3f8acc4f0573"} Mar 08 22:18:17.257217 master-0 kubenswrapper[29458]: I0308 22:18:17.257200 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ea001c11-5075-4318-8897-413d37e872ec","Type":"ContainerStarted","Data":"89cebe4ddafc0c92ec14b972b4be6d15f1e9bf9f2eac12376161e97a8e09b3d0"} Mar 08 22:18:17.257323 master-0 kubenswrapper[29458]: I0308 22:18:17.257310 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ea001c11-5075-4318-8897-413d37e872ec","Type":"ContainerStarted","Data":"0a2af5ccb5cdf0ba4efb92751ea1d20772bbf7c2dd8cc26853414b58203d55be"} Mar 08 22:18:17.280213 master-0 kubenswrapper[29458]: I0308 22:18:17.279068 29458 scope.go:117] "RemoveContainer" containerID="4bd4aa9baa387b392b4df50b79bd9b96db8ebc9695e43440320eac9e89b07164" Mar 08 22:18:17.283234 master-0 kubenswrapper[29458]: E0308 22:18:17.282613 29458 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bd4aa9baa387b392b4df50b79bd9b96db8ebc9695e43440320eac9e89b07164\": container with ID starting with 4bd4aa9baa387b392b4df50b79bd9b96db8ebc9695e43440320eac9e89b07164 not found: ID does not exist" containerID="4bd4aa9baa387b392b4df50b79bd9b96db8ebc9695e43440320eac9e89b07164" Mar 08 22:18:17.283234 master-0 kubenswrapper[29458]: I0308 22:18:17.282684 29458 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4bd4aa9baa387b392b4df50b79bd9b96db8ebc9695e43440320eac9e89b07164"} err="failed to get container status \"4bd4aa9baa387b392b4df50b79bd9b96db8ebc9695e43440320eac9e89b07164\": rpc error: code = NotFound desc = could not find container \"4bd4aa9baa387b392b4df50b79bd9b96db8ebc9695e43440320eac9e89b07164\": container with ID starting with 4bd4aa9baa387b392b4df50b79bd9b96db8ebc9695e43440320eac9e89b07164 not found: ID does not exist" Mar 08 22:18:17.285102 master-0 kubenswrapper[29458]: I0308 22:18:17.285022 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-7784d6fc57-xrnjf" podStartSLOduration=2.285007001 podStartE2EDuration="2.285007001s" podCreationTimestamp="2026-03-08 22:18:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:18:17.276837386 +0000 UTC m=+266.564894978" watchObservedRunningTime="2026-03-08 22:18:17.285007001 +0000 UTC m=+266.573064593" Mar 08 22:18:17.300892 master-0 kubenswrapper[29458]: I0308 22:18:17.300843 29458 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-695dfc9f84-n5pqv"] Mar 08 22:18:17.316172 master-0 kubenswrapper[29458]: I0308 22:18:17.315676 29458 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-695dfc9f84-n5pqv"] Mar 08 22:18:17.352462 master-0 kubenswrapper[29458]: I0308 22:18:17.352330 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/420ccfbd-ab7a-401c-93ae-6658805c8e78-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.352462 master-0 kubenswrapper[29458]: I0308 22:18:17.352401 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/420ccfbd-ab7a-401c-93ae-6658805c8e78-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.352462 master-0 kubenswrapper[29458]: I0308 22:18:17.352430 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/420ccfbd-ab7a-401c-93ae-6658805c8e78-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.352462 master-0 kubenswrapper[29458]: I0308 22:18:17.352461 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/420ccfbd-ab7a-401c-93ae-6658805c8e78-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.352760 master-0 kubenswrapper[29458]: I0308 22:18:17.352486 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/420ccfbd-ab7a-401c-93ae-6658805c8e78-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.352760 master-0 kubenswrapper[29458]: I0308 22:18:17.352507 29458 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/420ccfbd-ab7a-401c-93ae-6658805c8e78-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.352760 master-0 kubenswrapper[29458]: I0308 22:18:17.352584 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/420ccfbd-ab7a-401c-93ae-6658805c8e78-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.352760 master-0 kubenswrapper[29458]: I0308 22:18:17.352626 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/420ccfbd-ab7a-401c-93ae-6658805c8e78-web-config\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.352760 master-0 kubenswrapper[29458]: I0308 22:18:17.352644 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/420ccfbd-ab7a-401c-93ae-6658805c8e78-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.352760 master-0 kubenswrapper[29458]: I0308 22:18:17.352666 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/420ccfbd-ab7a-401c-93ae-6658805c8e78-config\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.354767 master-0 kubenswrapper[29458]: I0308 22:18:17.354714 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/420ccfbd-ab7a-401c-93ae-6658805c8e78-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.354977 master-0 kubenswrapper[29458]: I0308 22:18:17.354945 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/420ccfbd-ab7a-401c-93ae-6658805c8e78-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.355033 master-0 kubenswrapper[29458]: I0308 22:18:17.355008 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46rzz\" (UniqueName: \"kubernetes.io/projected/420ccfbd-ab7a-401c-93ae-6658805c8e78-kube-api-access-46rzz\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.355033 master-0 kubenswrapper[29458]: I0308 22:18:17.355018 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/420ccfbd-ab7a-401c-93ae-6658805c8e78-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " 
pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.355143 master-0 kubenswrapper[29458]: I0308 22:18:17.355097 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/420ccfbd-ab7a-401c-93ae-6658805c8e78-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.355143 master-0 kubenswrapper[29458]: I0308 22:18:17.355127 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/420ccfbd-ab7a-401c-93ae-6658805c8e78-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.355209 master-0 kubenswrapper[29458]: I0308 22:18:17.355182 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/420ccfbd-ab7a-401c-93ae-6658805c8e78-config-out\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.355239 master-0 kubenswrapper[29458]: I0308 22:18:17.355221 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/420ccfbd-ab7a-401c-93ae-6658805c8e78-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.355666 master-0 kubenswrapper[29458]: I0308 22:18:17.355277 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/420ccfbd-ab7a-401c-93ae-6658805c8e78-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.355855 master-0 kubenswrapper[29458]: I0308 22:18:17.355827 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/420ccfbd-ab7a-401c-93ae-6658805c8e78-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.356411 master-0 kubenswrapper[29458]: I0308 22:18:17.356375 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/420ccfbd-ab7a-401c-93ae-6658805c8e78-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.357600 master-0 kubenswrapper[29458]: I0308 22:18:17.357573 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/420ccfbd-ab7a-401c-93ae-6658805c8e78-config\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.358601 master-0 kubenswrapper[29458]: I0308 22:18:17.358267 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/420ccfbd-ab7a-401c-93ae-6658805c8e78-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.358601 master-0 kubenswrapper[29458]: I0308 22:18:17.358552 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/420ccfbd-ab7a-401c-93ae-6658805c8e78-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.359118 master-0 kubenswrapper[29458]: I0308 22:18:17.359051 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/420ccfbd-ab7a-401c-93ae-6658805c8e78-config-out\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.359540 master-0 kubenswrapper[29458]: I0308 22:18:17.359488 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/420ccfbd-ab7a-401c-93ae-6658805c8e78-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.361468 master-0 kubenswrapper[29458]: I0308 22:18:17.360711 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/420ccfbd-ab7a-401c-93ae-6658805c8e78-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.361468 master-0 kubenswrapper[29458]: I0308 22:18:17.360873 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/420ccfbd-ab7a-401c-93ae-6658805c8e78-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.361468 master-0 kubenswrapper[29458]: I0308 22:18:17.361159 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/420ccfbd-ab7a-401c-93ae-6658805c8e78-web-config\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.361468 master-0 kubenswrapper[29458]: I0308 22:18:17.361319 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/420ccfbd-ab7a-401c-93ae-6658805c8e78-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.361468 master-0 kubenswrapper[29458]: I0308 22:18:17.361423 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/420ccfbd-ab7a-401c-93ae-6658805c8e78-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.366485 master-0 kubenswrapper[29458]: I0308 22:18:17.366449 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/420ccfbd-ab7a-401c-93ae-6658805c8e78-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.366560 master-0 kubenswrapper[29458]: I0308 22:18:17.366514 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/420ccfbd-ab7a-401c-93ae-6658805c8e78-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.366596 master-0 kubenswrapper[29458]: I0308 22:18:17.366574 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/420ccfbd-ab7a-401c-93ae-6658805c8e78-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.366999 master-0 kubenswrapper[29458]: I0308 22:18:17.366944 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/420ccfbd-ab7a-401c-93ae-6658805c8e78-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.373759 master-0 kubenswrapper[29458]: I0308 22:18:17.373720 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46rzz\" (UniqueName: \"kubernetes.io/projected/420ccfbd-ab7a-401c-93ae-6658805c8e78-kube-api-access-46rzz\") pod \"prometheus-k8s-0\" (UID: \"420ccfbd-ab7a-401c-93ae-6658805c8e78\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.451511 master-0 kubenswrapper[29458]: I0308 22:18:17.451424 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:17.918389 master-0 kubenswrapper[29458]: I0308 22:18:17.918317 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 08 22:18:18.075609 master-0 kubenswrapper[29458]: W0308 22:18:18.075548 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod420ccfbd_ab7a_401c_93ae_6658805c8e78.slice/crio-8e8992cb0e6c04ed41949eca543c0b09161fef77d06a201ac342755184a64ff6 WatchSource:0}: Error finding container 8e8992cb0e6c04ed41949eca543c0b09161fef77d06a201ac342755184a64ff6: Status 404 returned error can't find the container with id 8e8992cb0e6c04ed41949eca543c0b09161fef77d06a201ac342755184a64ff6 Mar 08 22:18:18.274915 master-0 kubenswrapper[29458]: I0308 22:18:18.274840 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"420ccfbd-ab7a-401c-93ae-6658805c8e78","Type":"ContainerStarted","Data":"8e8992cb0e6c04ed41949eca543c0b09161fef77d06a201ac342755184a64ff6"} Mar 08 22:18:18.989540 master-0 kubenswrapper[29458]: I0308 22:18:18.989308 29458 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85d1ad38-c1b6-4fc4-a852-703ba6171ca3" path="/var/lib/kubelet/pods/85d1ad38-c1b6-4fc4-a852-703ba6171ca3/volumes" Mar 08 22:18:19.292455 master-0 kubenswrapper[29458]: I0308 22:18:19.292289 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ea001c11-5075-4318-8897-413d37e872ec","Type":"ContainerStarted","Data":"2eff3dd5647187539e436635c7752847a3ba6e788d24ff37e890ca1ce9cc863a"} Mar 08 22:18:19.299669 master-0 kubenswrapper[29458]: I0308 22:18:19.299593 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" event={"ID":"1ed6ab94-28e5-4f78-b579-01ded8462737","Type":"ContainerStarted","Data":"9082f2972e058f1c335f4ff769d96c5cbe7a42641bb082d1688f690c16b6b464"} Mar 08 22:18:19.299669 master-0 kubenswrapper[29458]: I0308 22:18:19.299666 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" event={"ID":"1ed6ab94-28e5-4f78-b579-01ded8462737","Type":"ContainerStarted","Data":"846a98cabd547eb2e64ba2279bf43ab4df9034ed72a8feca442404fcc2c902ea"} Mar 08 22:18:19.299870 master-0 kubenswrapper[29458]: I0308 22:18:19.299682 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" event={"ID":"1ed6ab94-28e5-4f78-b579-01ded8462737","Type":"ContainerStarted","Data":"501efcda45edc9ec37e42db9ae45c7d65de0554bc9a5cff2baf151e08f159317"} Mar 08 22:18:19.300596 master-0 kubenswrapper[29458]: I0308 22:18:19.300560 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" Mar 08 22:18:19.302873 master-0 kubenswrapper[29458]: I0308 22:18:19.302828 29458 generic.go:334] "Generic (PLEG): container finished" podID="420ccfbd-ab7a-401c-93ae-6658805c8e78" containerID="3fea68b8245fea41d885448dd073ca2ca864c1bdb04847b8f713ba9ce82ed185" exitCode=0 Mar 08 22:18:19.302942 master-0 kubenswrapper[29458]: I0308 22:18:19.302871 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"420ccfbd-ab7a-401c-93ae-6658805c8e78","Type":"ContainerDied","Data":"3fea68b8245fea41d885448dd073ca2ca864c1bdb04847b8f713ba9ce82ed185"} Mar 
Mar 08 22:18:19.337297 master-0 kubenswrapper[29458]: I0308 22:18:19.333129 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.380181983 podStartE2EDuration="8.333105687s" podCreationTimestamp="2026-03-08 22:18:11 +0000 UTC" firstStartedPulling="2026-03-08 22:18:13.188856279 +0000 UTC m=+262.476913871" lastFinishedPulling="2026-03-08 22:18:18.141779973 +0000 UTC m=+267.429837575" observedRunningTime="2026-03-08 22:18:19.326178677 +0000 UTC m=+268.614236289" watchObservedRunningTime="2026-03-08 22:18:19.333105687 +0000 UTC m=+268.621163289" Mar 08 22:18:19.389452 master-0 kubenswrapper[29458]: I0308 22:18:19.388770 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" podStartSLOduration=2.783204433 podStartE2EDuration="7.388741013s" podCreationTimestamp="2026-03-08 22:18:12 +0000 UTC" firstStartedPulling="2026-03-08 22:18:13.534254638 +0000 UTC m=+262.822312270" lastFinishedPulling="2026-03-08 22:18:18.139791258 +0000 UTC m=+267.427848850" observedRunningTime="2026-03-08 22:18:19.372843097 +0000 UTC m=+268.660900719" watchObservedRunningTime="2026-03-08 22:18:19.388741013 +0000 UTC m=+268.676798615" Mar 08 22:18:21.330752 master-0 kubenswrapper[29458]: I0308 22:18:21.330688 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-7768d84bb4-7rwb2" Mar 08 22:18:24.356605 master-0 kubenswrapper[29458]: I0308 22:18:24.356507 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"420ccfbd-ab7a-401c-93ae-6658805c8e78","Type":"ContainerStarted","Data":"bd9a39f6b2564cdb8c0b54c4cb0a12a1d01878ed0599646c9411a5183fb052fd"} Mar 08 22:18:24.356605 master-0 kubenswrapper[29458]: I0308 22:18:24.356576 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"420ccfbd-ab7a-401c-93ae-6658805c8e78","Type":"ContainerStarted","Data":"f3e0ebf34b88923c4c65612689458220f72f906f1f837789e27c16f28347e685"} Mar 08 22:18:24.356605 master-0 kubenswrapper[29458]: I0308 22:18:24.356597 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"420ccfbd-ab7a-401c-93ae-6658805c8e78","Type":"ContainerStarted","Data":"b0749964a970eebfb1e99d5285fc6f194f9938acb710a1754ef744600b50a928"} Mar 08 22:18:24.356605 master-0 kubenswrapper[29458]: I0308 22:18:24.356613 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"420ccfbd-ab7a-401c-93ae-6658805c8e78","Type":"ContainerStarted","Data":"29d6afa373d8f76707be29ed03d4d725d74899a3a717a8fff51b0ccaf5611cd0"} Mar 08 22:18:24.356605 master-0 kubenswrapper[29458]: I0308 22:18:24.356626 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"420ccfbd-ab7a-401c-93ae-6658805c8e78","Type":"ContainerStarted","Data":"a0c2e0462c205a32c2cea0db73f352400ab682623ad16671ff20fabc44a63ba3"} Mar 08 22:18:24.356605 master-0 kubenswrapper[29458]: I0308 22:18:24.356640 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"420ccfbd-ab7a-401c-93ae-6658805c8e78","Type":"ContainerStarted","Data":"90b9a5bc7624a5fd37e4ef5ab6ab8749343f21eb930c39a32522eb02c49128f9"}
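The pod_startup_latency_tracker records above carry enough data to reconstruct their own arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that same span with the image-pull window (lastFinishedPulling minus firstStartedPulling) subtracted. That relationship is inferred from the printed values, not from kubelet source; a Go sketch that reproduces the alertmanager-main-0 numbers:

package main

import (
	"fmt"
	"time"
)

// Timestamps copied from the alertmanager-main-0 record above.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-03-08 22:18:11 +0000 UTC")             // podCreationTimestamp
	firstPull := mustParse("2026-03-08 22:18:13.188856279 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2026-03-08 22:18:18.141779973 +0000 UTC")  // lastFinishedPulling
	running := mustParse("2026-03-08 22:18:19.333105687 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)          // 8.333105687s, matching podStartE2EDuration exactly
	slo := e2e - lastPull.Sub(firstPull) // ~3.380181993s; the log prints 3.380181983 (rounding in the logged float)

	fmt.Println("E2E:", e2e)
	fmt.Println("SLO:", slo)
}

The prometheus-k8s-0 record just below checks out the same way: 7.48683835s end-to-end minus the 3.899468063s pull window (22:18:19.304410029 to 22:18:23.203878092) gives exactly the logged podStartSLOduration=3.587370287.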
Mar 08 22:18:24.486961 master-0 kubenswrapper[29458]: I0308 22:18:24.486863 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=3.587370287 podStartE2EDuration="7.48683835s" podCreationTimestamp="2026-03-08 22:18:17 +0000 UTC" firstStartedPulling="2026-03-08 22:18:19.304410029 +0000 UTC m=+268.592467621" lastFinishedPulling="2026-03-08 22:18:23.203878092 +0000 UTC m=+272.491935684" observedRunningTime="2026-03-08 22:18:24.483962901 +0000 UTC m=+273.772020493" watchObservedRunningTime="2026-03-08 22:18:24.48683835 +0000 UTC m=+273.774895942" Mar 08 22:18:27.226256 master-0 kubenswrapper[29458]: I0308 22:18:27.224871 29458 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-cbcdbfdc5-b5crv" podUID="4353e44c-f1db-4a07-b6bd-0feb86102961" containerName="console" containerID="cri-o://91fac66385dbbbb506127846352f85f89c6d82af8337ab47935210b5bc8e9c1a" gracePeriod=15 Mar 08 22:18:27.388142 master-0 kubenswrapper[29458]: I0308 22:18:27.387315 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-cbcdbfdc5-b5crv_4353e44c-f1db-4a07-b6bd-0feb86102961/console/0.log" Mar 08 22:18:27.388142 master-0 kubenswrapper[29458]: I0308 22:18:27.387396 29458 generic.go:334] "Generic (PLEG): container finished" podID="4353e44c-f1db-4a07-b6bd-0feb86102961" containerID="91fac66385dbbbb506127846352f85f89c6d82af8337ab47935210b5bc8e9c1a" exitCode=2 Mar 08 22:18:27.388142 master-0 kubenswrapper[29458]: I0308 22:18:27.387445 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-cbcdbfdc5-b5crv" event={"ID":"4353e44c-f1db-4a07-b6bd-0feb86102961","Type":"ContainerDied","Data":"91fac66385dbbbb506127846352f85f89c6d82af8337ab47935210b5bc8e9c1a"} Mar 08 22:18:27.452136 master-0 kubenswrapper[29458]: I0308 22:18:27.452035 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:18:27.734500 master-0 kubenswrapper[29458]: I0308 22:18:27.734435 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-cbcdbfdc5-b5crv_4353e44c-f1db-4a07-b6bd-0feb86102961/console/0.log" Mar 08 22:18:27.734615 master-0 kubenswrapper[29458]: I0308 22:18:27.734549 29458 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-cbcdbfdc5-b5crv" Mar 08 22:18:27.868557 master-0 kubenswrapper[29458]: I0308 22:18:27.868456 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4353e44c-f1db-4a07-b6bd-0feb86102961-trusted-ca-bundle\") pod \"4353e44c-f1db-4a07-b6bd-0feb86102961\" (UID: \"4353e44c-f1db-4a07-b6bd-0feb86102961\") " Mar 08 22:18:27.868841 master-0 kubenswrapper[29458]: I0308 22:18:27.868683 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4353e44c-f1db-4a07-b6bd-0feb86102961-console-oauth-config\") pod \"4353e44c-f1db-4a07-b6bd-0feb86102961\" (UID: \"4353e44c-f1db-4a07-b6bd-0feb86102961\") " Mar 08 22:18:27.869156 master-0 kubenswrapper[29458]: I0308 22:18:27.869106 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4353e44c-f1db-4a07-b6bd-0feb86102961-console-config\") pod \"4353e44c-f1db-4a07-b6bd-0feb86102961\" (UID: \"4353e44c-f1db-4a07-b6bd-0feb86102961\") " Mar 08 22:18:27.869253 master-0 kubenswrapper[29458]: I0308 22:18:27.869185 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4353e44c-f1db-4a07-b6bd-0feb86102961-service-ca\") pod \"4353e44c-f1db-4a07-b6bd-0feb86102961\" (UID: \"4353e44c-f1db-4a07-b6bd-0feb86102961\") " Mar 08 22:18:27.869431 master-0 kubenswrapper[29458]: I0308 22:18:27.869390 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pttg\" (UniqueName: \"kubernetes.io/projected/4353e44c-f1db-4a07-b6bd-0feb86102961-kube-api-access-2pttg\") pod \"4353e44c-f1db-4a07-b6bd-0feb86102961\" (UID: \"4353e44c-f1db-4a07-b6bd-0feb86102961\") " Mar 08 22:18:27.869740 master-0 kubenswrapper[29458]: I0308 22:18:27.869697 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4353e44c-f1db-4a07-b6bd-0feb86102961-oauth-serving-cert\") pod \"4353e44c-f1db-4a07-b6bd-0feb86102961\" (UID: \"4353e44c-f1db-4a07-b6bd-0feb86102961\") " Mar 08 22:18:27.869845 master-0 kubenswrapper[29458]: I0308 22:18:27.869800 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4353e44c-f1db-4a07-b6bd-0feb86102961-console-serving-cert\") pod \"4353e44c-f1db-4a07-b6bd-0feb86102961\" (UID: \"4353e44c-f1db-4a07-b6bd-0feb86102961\") " Mar 08 22:18:27.869930 master-0 kubenswrapper[29458]: I0308 22:18:27.869845 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4353e44c-f1db-4a07-b6bd-0feb86102961-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "4353e44c-f1db-4a07-b6bd-0feb86102961" (UID: "4353e44c-f1db-4a07-b6bd-0feb86102961"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:18:27.869988 master-0 kubenswrapper[29458]: I0308 22:18:27.869884 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4353e44c-f1db-4a07-b6bd-0feb86102961-service-ca" (OuterVolumeSpecName: "service-ca") pod "4353e44c-f1db-4a07-b6bd-0feb86102961" (UID: "4353e44c-f1db-4a07-b6bd-0feb86102961"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:18:27.870339 master-0 kubenswrapper[29458]: I0308 22:18:27.870276 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4353e44c-f1db-4a07-b6bd-0feb86102961-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "4353e44c-f1db-4a07-b6bd-0feb86102961" (UID: "4353e44c-f1db-4a07-b6bd-0feb86102961"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:18:27.870876 master-0 kubenswrapper[29458]: I0308 22:18:27.870818 29458 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4353e44c-f1db-4a07-b6bd-0feb86102961-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 08 22:18:27.870876 master-0 kubenswrapper[29458]: I0308 22:18:27.870857 29458 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4353e44c-f1db-4a07-b6bd-0feb86102961-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 08 22:18:27.870876 master-0 kubenswrapper[29458]: I0308 22:18:27.870871 29458 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4353e44c-f1db-4a07-b6bd-0feb86102961-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 08 22:18:27.871186 master-0 kubenswrapper[29458]: I0308 22:18:27.871109 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4353e44c-f1db-4a07-b6bd-0feb86102961-console-config" (OuterVolumeSpecName: "console-config") pod "4353e44c-f1db-4a07-b6bd-0feb86102961" (UID: "4353e44c-f1db-4a07-b6bd-0feb86102961"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:18:27.873740 master-0 kubenswrapper[29458]: I0308 22:18:27.873674 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4353e44c-f1db-4a07-b6bd-0feb86102961-kube-api-access-2pttg" (OuterVolumeSpecName: "kube-api-access-2pttg") pod "4353e44c-f1db-4a07-b6bd-0feb86102961" (UID: "4353e44c-f1db-4a07-b6bd-0feb86102961"). InnerVolumeSpecName "kube-api-access-2pttg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 22:18:27.874866 master-0 kubenswrapper[29458]: I0308 22:18:27.874754 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4353e44c-f1db-4a07-b6bd-0feb86102961-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "4353e44c-f1db-4a07-b6bd-0feb86102961" (UID: "4353e44c-f1db-4a07-b6bd-0feb86102961"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 22:18:27.877605 master-0 kubenswrapper[29458]: I0308 22:18:27.877552 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4353e44c-f1db-4a07-b6bd-0feb86102961-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "4353e44c-f1db-4a07-b6bd-0feb86102961" (UID: "4353e44c-f1db-4a07-b6bd-0feb86102961"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 22:18:27.972443 master-0 kubenswrapper[29458]: I0308 22:18:27.972207 29458 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4353e44c-f1db-4a07-b6bd-0feb86102961-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 08 22:18:27.972443 master-0 kubenswrapper[29458]: I0308 22:18:27.972262 29458 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4353e44c-f1db-4a07-b6bd-0feb86102961-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 08 22:18:27.972443 master-0 kubenswrapper[29458]: I0308 22:18:27.972273 29458 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4353e44c-f1db-4a07-b6bd-0feb86102961-console-config\") on node \"master-0\" DevicePath \"\"" Mar 08 22:18:27.972443 master-0 kubenswrapper[29458]: I0308 22:18:27.972283 29458 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pttg\" (UniqueName: \"kubernetes.io/projected/4353e44c-f1db-4a07-b6bd-0feb86102961-kube-api-access-2pttg\") on node \"master-0\" DevicePath \"\"" Mar 08 22:18:28.400479 master-0 kubenswrapper[29458]: I0308 22:18:28.400402 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-cbcdbfdc5-b5crv_4353e44c-f1db-4a07-b6bd-0feb86102961/console/0.log" Mar 08 22:18:28.401430 master-0 kubenswrapper[29458]: I0308 22:18:28.400501 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-cbcdbfdc5-b5crv" event={"ID":"4353e44c-f1db-4a07-b6bd-0feb86102961","Type":"ContainerDied","Data":"f7dfc27814b9cf2910eff1e67360a97f697b2da6c5d4f804a7edc708dc7a9cff"} Mar 08 22:18:28.401430 master-0 kubenswrapper[29458]: I0308 22:18:28.400578 29458 scope.go:117] "RemoveContainer" containerID="91fac66385dbbbb506127846352f85f89c6d82af8337ab47935210b5bc8e9c1a" Mar 08 22:18:28.401430 master-0 kubenswrapper[29458]: I0308 22:18:28.400826 29458 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-cbcdbfdc5-b5crv" Mar 08 22:18:28.470191 master-0 kubenswrapper[29458]: I0308 22:18:28.470043 29458 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-cbcdbfdc5-b5crv"] Mar 08 22:18:28.499156 master-0 kubenswrapper[29458]: I0308 22:18:28.499059 29458 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-cbcdbfdc5-b5crv"] Mar 08 22:18:28.990733 master-0 kubenswrapper[29458]: I0308 22:18:28.990638 29458 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4353e44c-f1db-4a07-b6bd-0feb86102961" path="/var/lib/kubelet/pods/4353e44c-f1db-4a07-b6bd-0feb86102961/volumes" Mar 08 22:18:35.760277 master-0 kubenswrapper[29458]: I0308 22:18:35.760191 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-7784d6fc57-xrnjf" Mar 08 22:18:35.760277 master-0 kubenswrapper[29458]: I0308 22:18:35.760311 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-7784d6fc57-xrnjf" Mar 08 22:18:51.001618 master-0 kubenswrapper[29458]: I0308 22:18:51.001507 29458 kubelet.go:1505] "Image garbage collection succeeded" Mar 08 22:18:55.769420 master-0 kubenswrapper[29458]: I0308 22:18:55.769325 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-7784d6fc57-xrnjf" Mar 08 22:18:55.777409 master-0 kubenswrapper[29458]: I0308 22:18:55.777332 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-7784d6fc57-xrnjf" Mar 08 22:19:17.452706 master-0 kubenswrapper[29458]: I0308 22:19:17.452564 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:19:17.505371 master-0 kubenswrapper[29458]: I0308 22:19:17.505257 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:19:17.944693 master-0 kubenswrapper[29458]: I0308 22:19:17.944594 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Mar 08 22:19:39.539215 master-0 kubenswrapper[29458]: I0308 22:19:39.538978 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Mar 08 22:19:39.540278 master-0 kubenswrapper[29458]: E0308 22:19:39.539304 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4353e44c-f1db-4a07-b6bd-0feb86102961" containerName="console" Mar 08 22:19:39.540278 master-0 kubenswrapper[29458]: I0308 22:19:39.539318 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="4353e44c-f1db-4a07-b6bd-0feb86102961" containerName="console" Mar 08 22:19:39.540278 master-0 kubenswrapper[29458]: I0308 22:19:39.539512 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="4353e44c-f1db-4a07-b6bd-0feb86102961" containerName="console" Mar 08 22:19:39.540278 master-0 kubenswrapper[29458]: I0308 22:19:39.539973 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 08 22:19:39.542282 master-0 kubenswrapper[29458]: I0308 22:19:39.542232 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-v7cvh" Mar 08 22:19:39.542983 master-0 kubenswrapper[29458]: I0308 22:19:39.542953 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 08 22:19:39.564486 master-0 kubenswrapper[29458]: I0308 22:19:39.564376 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Mar 08 22:19:39.654964 master-0 kubenswrapper[29458]: I0308 22:19:39.654874 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8c03808c-6ee1-4d20-8fc2-ef0100b43ee9-kube-api-access\") pod \"installer-4-master-0\" (UID: \"8c03808c-6ee1-4d20-8fc2-ef0100b43ee9\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 08 22:19:39.655272 master-0 kubenswrapper[29458]: I0308 22:19:39.655202 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8c03808c-6ee1-4d20-8fc2-ef0100b43ee9-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"8c03808c-6ee1-4d20-8fc2-ef0100b43ee9\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 08 22:19:39.655311 master-0 kubenswrapper[29458]: I0308 22:19:39.655286 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8c03808c-6ee1-4d20-8fc2-ef0100b43ee9-var-lock\") pod \"installer-4-master-0\" (UID: \"8c03808c-6ee1-4d20-8fc2-ef0100b43ee9\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 08 22:19:39.757304 master-0 kubenswrapper[29458]: I0308 22:19:39.757231 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8c03808c-6ee1-4d20-8fc2-ef0100b43ee9-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"8c03808c-6ee1-4d20-8fc2-ef0100b43ee9\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 08 22:19:39.757665 master-0 kubenswrapper[29458]: I0308 22:19:39.757393 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8c03808c-6ee1-4d20-8fc2-ef0100b43ee9-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"8c03808c-6ee1-4d20-8fc2-ef0100b43ee9\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 08 22:19:39.757665 master-0 kubenswrapper[29458]: I0308 22:19:39.757486 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8c03808c-6ee1-4d20-8fc2-ef0100b43ee9-var-lock\") pod \"installer-4-master-0\" (UID: \"8c03808c-6ee1-4d20-8fc2-ef0100b43ee9\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 08 22:19:39.757665 master-0 kubenswrapper[29458]: I0308 22:19:39.757539 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8c03808c-6ee1-4d20-8fc2-ef0100b43ee9-var-lock\") pod \"installer-4-master-0\" (UID: \"8c03808c-6ee1-4d20-8fc2-ef0100b43ee9\") " 
pod="openshift-kube-controller-manager/installer-4-master-0" Mar 08 22:19:39.757831 master-0 kubenswrapper[29458]: I0308 22:19:39.757786 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8c03808c-6ee1-4d20-8fc2-ef0100b43ee9-kube-api-access\") pod \"installer-4-master-0\" (UID: \"8c03808c-6ee1-4d20-8fc2-ef0100b43ee9\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 08 22:19:39.777137 master-0 kubenswrapper[29458]: I0308 22:19:39.777041 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8c03808c-6ee1-4d20-8fc2-ef0100b43ee9-kube-api-access\") pod \"installer-4-master-0\" (UID: \"8c03808c-6ee1-4d20-8fc2-ef0100b43ee9\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 08 22:19:39.860696 master-0 kubenswrapper[29458]: I0308 22:19:39.860595 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 08 22:19:40.349306 master-0 kubenswrapper[29458]: I0308 22:19:40.349224 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Mar 08 22:19:40.355609 master-0 kubenswrapper[29458]: W0308 22:19:40.355543 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8c03808c_6ee1_4d20_8fc2_ef0100b43ee9.slice/crio-63e583f194f278a72f9d8b0f95d69a7dda35ce6ebe9778e45dc37b0408a29b7d WatchSource:0}: Error finding container 63e583f194f278a72f9d8b0f95d69a7dda35ce6ebe9778e45dc37b0408a29b7d: Status 404 returned error can't find the container with id 63e583f194f278a72f9d8b0f95d69a7dda35ce6ebe9778e45dc37b0408a29b7d Mar 08 22:19:41.145337 master-0 kubenswrapper[29458]: I0308 22:19:41.145253 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"8c03808c-6ee1-4d20-8fc2-ef0100b43ee9","Type":"ContainerStarted","Data":"cceb6092fd4a35314cf77a9f3b96f8998b74ff3d55c6f5a1883d64b7d9cf8970"} Mar 08 22:19:41.145337 master-0 kubenswrapper[29458]: I0308 22:19:41.145342 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"8c03808c-6ee1-4d20-8fc2-ef0100b43ee9","Type":"ContainerStarted","Data":"63e583f194f278a72f9d8b0f95d69a7dda35ce6ebe9778e45dc37b0408a29b7d"} Mar 08 22:19:41.169598 master-0 kubenswrapper[29458]: I0308 22:19:41.169496 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-0" podStartSLOduration=2.16947609 podStartE2EDuration="2.16947609s" podCreationTimestamp="2026-03-08 22:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:19:41.167408703 +0000 UTC m=+350.455466295" watchObservedRunningTime="2026-03-08 22:19:41.16947609 +0000 UTC m=+350.457533682" Mar 08 22:20:13.639001 master-0 kubenswrapper[29458]: I0308 22:20:13.638905 29458 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 08 22:20:13.640162 master-0 kubenswrapper[29458]: I0308 22:20:13.639463 29458 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
podUID="7e4fb17aa6f4ce82697c1badb6e3e623" containerName="cluster-policy-controller" containerID="cri-o://24468252b1016ecbfc6fabcc842f03b85cc1d8d62ad0492983e2d43991a2cb4a" gracePeriod=30 Mar 08 22:20:13.640162 master-0 kubenswrapper[29458]: I0308 22:20:13.639751 29458 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7e4fb17aa6f4ce82697c1badb6e3e623" containerName="kube-controller-manager" containerID="cri-o://f3f780418e0dc78b1593ce2cd94d46df24ecbd7393affbd8ab7521d75f83183d" gracePeriod=30 Mar 08 22:20:13.640162 master-0 kubenswrapper[29458]: I0308 22:20:13.639862 29458 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7e4fb17aa6f4ce82697c1badb6e3e623" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://713d5bb870be4b517e2a3b6934cbc3a8dbb4fb996bc551e64dbb0c038eff7f98" gracePeriod=30 Mar 08 22:20:13.640162 master-0 kubenswrapper[29458]: I0308 22:20:13.639962 29458 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7e4fb17aa6f4ce82697c1badb6e3e623" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://15c38815310dffefa782d7e3b86b468eadf91008125f12d833ccabdf6a47990b" gracePeriod=30 Mar 08 22:20:13.644165 master-0 kubenswrapper[29458]: I0308 22:20:13.644042 29458 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 08 22:20:13.648344 master-0 kubenswrapper[29458]: E0308 22:20:13.647538 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e4fb17aa6f4ce82697c1badb6e3e623" containerName="kube-controller-manager" Mar 08 22:20:13.648344 master-0 kubenswrapper[29458]: I0308 22:20:13.647598 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e4fb17aa6f4ce82697c1badb6e3e623" containerName="kube-controller-manager" Mar 08 22:20:13.648344 master-0 kubenswrapper[29458]: E0308 22:20:13.647652 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e4fb17aa6f4ce82697c1badb6e3e623" containerName="kube-controller-manager-cert-syncer" Mar 08 22:20:13.648344 master-0 kubenswrapper[29458]: I0308 22:20:13.647665 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e4fb17aa6f4ce82697c1badb6e3e623" containerName="kube-controller-manager-cert-syncer" Mar 08 22:20:13.648344 master-0 kubenswrapper[29458]: E0308 22:20:13.647701 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e4fb17aa6f4ce82697c1badb6e3e623" containerName="cluster-policy-controller" Mar 08 22:20:13.648344 master-0 kubenswrapper[29458]: I0308 22:20:13.647713 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e4fb17aa6f4ce82697c1badb6e3e623" containerName="cluster-policy-controller" Mar 08 22:20:13.648344 master-0 kubenswrapper[29458]: E0308 22:20:13.647735 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e4fb17aa6f4ce82697c1badb6e3e623" containerName="kube-controller-manager-recovery-controller" Mar 08 22:20:13.648344 master-0 kubenswrapper[29458]: I0308 22:20:13.647751 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e4fb17aa6f4ce82697c1badb6e3e623" containerName="kube-controller-manager-recovery-controller" Mar 08 22:20:13.648344 master-0 kubenswrapper[29458]: I0308 22:20:13.648152 29458 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="7e4fb17aa6f4ce82697c1badb6e3e623" containerName="kube-controller-manager" Mar 08 22:20:13.648344 master-0 kubenswrapper[29458]: I0308 22:20:13.648206 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e4fb17aa6f4ce82697c1badb6e3e623" containerName="kube-controller-manager-recovery-controller" Mar 08 22:20:13.648344 master-0 kubenswrapper[29458]: I0308 22:20:13.648232 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e4fb17aa6f4ce82697c1badb6e3e623" containerName="kube-controller-manager-cert-syncer" Mar 08 22:20:13.648344 master-0 kubenswrapper[29458]: I0308 22:20:13.648258 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e4fb17aa6f4ce82697c1badb6e3e623" containerName="kube-controller-manager" Mar 08 22:20:13.648344 master-0 kubenswrapper[29458]: I0308 22:20:13.648353 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e4fb17aa6f4ce82697c1badb6e3e623" containerName="cluster-policy-controller" Mar 08 22:20:13.648786 master-0 kubenswrapper[29458]: E0308 22:20:13.648643 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e4fb17aa6f4ce82697c1badb6e3e623" containerName="kube-controller-manager" Mar 08 22:20:13.648786 master-0 kubenswrapper[29458]: I0308 22:20:13.648669 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e4fb17aa6f4ce82697c1badb6e3e623" containerName="kube-controller-manager" Mar 08 22:20:13.761928 master-0 kubenswrapper[29458]: I0308 22:20:13.761846 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b95a77eed40019e8cface8c31482bb18-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"b95a77eed40019e8cface8c31482bb18\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:20:13.762166 master-0 kubenswrapper[29458]: I0308 22:20:13.762043 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b95a77eed40019e8cface8c31482bb18-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"b95a77eed40019e8cface8c31482bb18\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:20:13.864153 master-0 kubenswrapper[29458]: I0308 22:20:13.863893 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b95a77eed40019e8cface8c31482bb18-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"b95a77eed40019e8cface8c31482bb18\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:20:13.864153 master-0 kubenswrapper[29458]: I0308 22:20:13.864005 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b95a77eed40019e8cface8c31482bb18-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"b95a77eed40019e8cface8c31482bb18\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:20:13.864153 master-0 kubenswrapper[29458]: I0308 22:20:13.864023 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b95a77eed40019e8cface8c31482bb18-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"b95a77eed40019e8cface8c31482bb18\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:20:13.864153 master-0 kubenswrapper[29458]: I0308 22:20:13.864059 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b95a77eed40019e8cface8c31482bb18-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"b95a77eed40019e8cface8c31482bb18\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:20:13.928126 master-0 kubenswrapper[29458]: I0308 22:20:13.927883 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7e4fb17aa6f4ce82697c1badb6e3e623/kube-controller-manager-cert-syncer/0.log" Mar 08 22:20:13.932757 master-0 kubenswrapper[29458]: I0308 22:20:13.932719 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7e4fb17aa6f4ce82697c1badb6e3e623/kube-controller-manager/0.log" Mar 08 22:20:13.933034 master-0 kubenswrapper[29458]: I0308 22:20:13.932809 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:20:13.936933 master-0 kubenswrapper[29458]: I0308 22:20:13.936879 29458 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="7e4fb17aa6f4ce82697c1badb6e3e623" podUID="b95a77eed40019e8cface8c31482bb18" Mar 08 22:20:14.067168 master-0 kubenswrapper[29458]: I0308 22:20:14.066992 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7e4fb17aa6f4ce82697c1badb6e3e623-resource-dir\") pod \"7e4fb17aa6f4ce82697c1badb6e3e623\" (UID: \"7e4fb17aa6f4ce82697c1badb6e3e623\") " Mar 08 22:20:14.067478 master-0 kubenswrapper[29458]: I0308 22:20:14.067207 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e4fb17aa6f4ce82697c1badb6e3e623-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "7e4fb17aa6f4ce82697c1badb6e3e623" (UID: "7e4fb17aa6f4ce82697c1badb6e3e623"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:20:14.067478 master-0 kubenswrapper[29458]: I0308 22:20:14.067298 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7e4fb17aa6f4ce82697c1badb6e3e623-cert-dir\") pod \"7e4fb17aa6f4ce82697c1badb6e3e623\" (UID: \"7e4fb17aa6f4ce82697c1badb6e3e623\") " Mar 08 22:20:14.067478 master-0 kubenswrapper[29458]: I0308 22:20:14.067413 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e4fb17aa6f4ce82697c1badb6e3e623-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "7e4fb17aa6f4ce82697c1badb6e3e623" (UID: "7e4fb17aa6f4ce82697c1badb6e3e623"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:20:14.070818 master-0 kubenswrapper[29458]: I0308 22:20:14.070760 29458 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7e4fb17aa6f4ce82697c1badb6e3e623-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 22:20:14.070818 master-0 kubenswrapper[29458]: I0308 22:20:14.070824 29458 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7e4fb17aa6f4ce82697c1badb6e3e623-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 22:20:14.509403 master-0 kubenswrapper[29458]: I0308 22:20:14.509297 29458 generic.go:334] "Generic (PLEG): container finished" podID="8c03808c-6ee1-4d20-8fc2-ef0100b43ee9" containerID="cceb6092fd4a35314cf77a9f3b96f8998b74ff3d55c6f5a1883d64b7d9cf8970" exitCode=0 Mar 08 22:20:14.509738 master-0 kubenswrapper[29458]: I0308 22:20:14.509440 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"8c03808c-6ee1-4d20-8fc2-ef0100b43ee9","Type":"ContainerDied","Data":"cceb6092fd4a35314cf77a9f3b96f8998b74ff3d55c6f5a1883d64b7d9cf8970"} Mar 08 22:20:14.514296 master-0 kubenswrapper[29458]: I0308 22:20:14.514219 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7e4fb17aa6f4ce82697c1badb6e3e623/kube-controller-manager-cert-syncer/0.log" Mar 08 22:20:14.516011 master-0 kubenswrapper[29458]: I0308 22:20:14.515951 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7e4fb17aa6f4ce82697c1badb6e3e623/kube-controller-manager/0.log" Mar 08 22:20:14.516192 master-0 kubenswrapper[29458]: I0308 22:20:14.516043 29458 generic.go:334] "Generic (PLEG): container finished" podID="7e4fb17aa6f4ce82697c1badb6e3e623" containerID="f3f780418e0dc78b1593ce2cd94d46df24ecbd7393affbd8ab7521d75f83183d" exitCode=0 Mar 08 22:20:14.516192 master-0 kubenswrapper[29458]: I0308 22:20:14.516104 29458 generic.go:334] "Generic (PLEG): container finished" podID="7e4fb17aa6f4ce82697c1badb6e3e623" containerID="713d5bb870be4b517e2a3b6934cbc3a8dbb4fb996bc551e64dbb0c038eff7f98" exitCode=0 Mar 08 22:20:14.516192 master-0 kubenswrapper[29458]: I0308 22:20:14.516135 29458 generic.go:334] "Generic (PLEG): container finished" podID="7e4fb17aa6f4ce82697c1badb6e3e623" containerID="15c38815310dffefa782d7e3b86b468eadf91008125f12d833ccabdf6a47990b" exitCode=2 Mar 08 22:20:14.516192 master-0 kubenswrapper[29458]: I0308 22:20:14.516161 29458 generic.go:334] "Generic (PLEG): container finished" podID="7e4fb17aa6f4ce82697c1badb6e3e623" containerID="24468252b1016ecbfc6fabcc842f03b85cc1d8d62ad0492983e2d43991a2cb4a" exitCode=0 Mar 08 22:20:14.516502 master-0 kubenswrapper[29458]: I0308 22:20:14.516251 29458 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:20:14.516502 master-0 kubenswrapper[29458]: I0308 22:20:14.516300 29458 scope.go:117] "RemoveContainer" containerID="045d96fc5260120205fd3f9cca2039678cbcc24c6c931c6bbf3f1ba418756e6c" Mar 08 22:20:14.516502 master-0 kubenswrapper[29458]: I0308 22:20:14.516268 29458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="270111bd9a880fa859abff7a300a5a42546d0f86314f375208a892a811a648e7" Mar 08 22:20:14.549456 master-0 kubenswrapper[29458]: I0308 22:20:14.546821 29458 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="7e4fb17aa6f4ce82697c1badb6e3e623" podUID="b95a77eed40019e8cface8c31482bb18" Mar 08 22:20:14.558773 master-0 kubenswrapper[29458]: I0308 22:20:14.558656 29458 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="7e4fb17aa6f4ce82697c1badb6e3e623" podUID="b95a77eed40019e8cface8c31482bb18" Mar 08 22:20:14.988958 master-0 kubenswrapper[29458]: I0308 22:20:14.988879 29458 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e4fb17aa6f4ce82697c1badb6e3e623" path="/var/lib/kubelet/pods/7e4fb17aa6f4ce82697c1badb6e3e623/volumes" Mar 08 22:20:15.532018 master-0 kubenswrapper[29458]: I0308 22:20:15.531924 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7e4fb17aa6f4ce82697c1badb6e3e623/kube-controller-manager-cert-syncer/0.log" Mar 08 22:20:15.957705 master-0 kubenswrapper[29458]: I0308 22:20:15.957629 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 08 22:20:16.107128 master-0 kubenswrapper[29458]: I0308 22:20:16.106975 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8c03808c-6ee1-4d20-8fc2-ef0100b43ee9-kubelet-dir\") pod \"8c03808c-6ee1-4d20-8fc2-ef0100b43ee9\" (UID: \"8c03808c-6ee1-4d20-8fc2-ef0100b43ee9\") " Mar 08 22:20:16.108159 master-0 kubenswrapper[29458]: I0308 22:20:16.107180 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8c03808c-6ee1-4d20-8fc2-ef0100b43ee9-kube-api-access\") pod \"8c03808c-6ee1-4d20-8fc2-ef0100b43ee9\" (UID: \"8c03808c-6ee1-4d20-8fc2-ef0100b43ee9\") " Mar 08 22:20:16.108159 master-0 kubenswrapper[29458]: I0308 22:20:16.107199 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c03808c-6ee1-4d20-8fc2-ef0100b43ee9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8c03808c-6ee1-4d20-8fc2-ef0100b43ee9" (UID: "8c03808c-6ee1-4d20-8fc2-ef0100b43ee9"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:20:16.108159 master-0 kubenswrapper[29458]: I0308 22:20:16.107232 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8c03808c-6ee1-4d20-8fc2-ef0100b43ee9-var-lock\") pod \"8c03808c-6ee1-4d20-8fc2-ef0100b43ee9\" (UID: \"8c03808c-6ee1-4d20-8fc2-ef0100b43ee9\") " Mar 08 22:20:16.108159 master-0 kubenswrapper[29458]: I0308 22:20:16.107299 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c03808c-6ee1-4d20-8fc2-ef0100b43ee9-var-lock" (OuterVolumeSpecName: "var-lock") pod "8c03808c-6ee1-4d20-8fc2-ef0100b43ee9" (UID: "8c03808c-6ee1-4d20-8fc2-ef0100b43ee9"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 22:20:16.108777 master-0 kubenswrapper[29458]: I0308 22:20:16.108681 29458 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8c03808c-6ee1-4d20-8fc2-ef0100b43ee9-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 08 22:20:16.108777 master-0 kubenswrapper[29458]: I0308 22:20:16.108735 29458 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8c03808c-6ee1-4d20-8fc2-ef0100b43ee9-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 22:20:16.112600 master-0 kubenswrapper[29458]: I0308 22:20:16.112520 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c03808c-6ee1-4d20-8fc2-ef0100b43ee9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8c03808c-6ee1-4d20-8fc2-ef0100b43ee9" (UID: "8c03808c-6ee1-4d20-8fc2-ef0100b43ee9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 22:20:16.211616 master-0 kubenswrapper[29458]: I0308 22:20:16.211347 29458 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8c03808c-6ee1-4d20-8fc2-ef0100b43ee9-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 08 22:20:16.546566 master-0 kubenswrapper[29458]: I0308 22:20:16.546310 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"8c03808c-6ee1-4d20-8fc2-ef0100b43ee9","Type":"ContainerDied","Data":"63e583f194f278a72f9d8b0f95d69a7dda35ce6ebe9778e45dc37b0408a29b7d"} Mar 08 22:20:16.546566 master-0 kubenswrapper[29458]: I0308 22:20:16.546394 29458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63e583f194f278a72f9d8b0f95d69a7dda35ce6ebe9778e45dc37b0408a29b7d" Mar 08 22:20:16.546566 master-0 kubenswrapper[29458]: I0308 22:20:16.546409 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 08 22:20:28.972683 master-0 kubenswrapper[29458]: I0308 22:20:28.972509 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:20:28.991767 master-0 kubenswrapper[29458]: I0308 22:20:28.991680 29458 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="eef39c81-b6c0-4357-a68b-47e55b2ae3fa" Mar 08 22:20:28.991767 master-0 kubenswrapper[29458]: I0308 22:20:28.991769 29458 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="eef39c81-b6c0-4357-a68b-47e55b2ae3fa" Mar 08 22:20:29.016279 master-0 kubenswrapper[29458]: I0308 22:20:29.011573 29458 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:20:29.016279 master-0 kubenswrapper[29458]: I0308 22:20:29.014883 29458 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 08 22:20:29.025956 master-0 kubenswrapper[29458]: I0308 22:20:29.025879 29458 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 08 22:20:29.038731 master-0 kubenswrapper[29458]: I0308 22:20:29.038679 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:20:29.046804 master-0 kubenswrapper[29458]: I0308 22:20:29.046718 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 08 22:20:29.683973 master-0 kubenswrapper[29458]: I0308 22:20:29.683881 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"b95a77eed40019e8cface8c31482bb18","Type":"ContainerStarted","Data":"3c541f6541fc7d2a05a03f7c625ed04599dc8d171455f85e11809c2f53546303"} Mar 08 22:20:29.684134 master-0 kubenswrapper[29458]: I0308 22:20:29.683988 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"b95a77eed40019e8cface8c31482bb18","Type":"ContainerStarted","Data":"1cebc880377228391a7fe8b33d314a27c6b2a52387d4611e27f464aecba14b73"} Mar 08 22:20:30.693732 master-0 kubenswrapper[29458]: I0308 22:20:30.693618 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"b95a77eed40019e8cface8c31482bb18","Type":"ContainerStarted","Data":"b019a6ad965db2d3c2b553f0e4a5318796868053c2c4cf567c226d68f50046cc"} Mar 08 22:20:30.693732 master-0 kubenswrapper[29458]: I0308 22:20:30.693683 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"b95a77eed40019e8cface8c31482bb18","Type":"ContainerStarted","Data":"5138495b4357923d0c8b7777d190b3dfe3cc018ae09136af7dbac61152c8e847"} Mar 08 22:20:30.693732 master-0 kubenswrapper[29458]: I0308 22:20:30.693694 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"b95a77eed40019e8cface8c31482bb18","Type":"ContainerStarted","Data":"8a710d3bcfc0f7df3506ea1caa74c22d9ad8600fecaf22eacb72b439f47e1765"} Mar 08 22:20:39.040512 master-0 kubenswrapper[29458]: I0308 22:20:39.039997 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:20:39.040512 master-0 kubenswrapper[29458]: I0308 22:20:39.040136 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:20:39.040512 master-0 kubenswrapper[29458]: I0308 22:20:39.040161 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:20:39.040512 master-0 kubenswrapper[29458]: I0308 22:20:39.040178 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:20:39.044646 master-0 kubenswrapper[29458]: I0308 22:20:39.040565 29458 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 08 22:20:39.044646 master-0 kubenswrapper[29458]: I0308 22:20:39.040712 29458 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="b95a77eed40019e8cface8c31482bb18" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 08 22:20:39.050946 master-0 kubenswrapper[29458]: I0308 22:20:39.050861 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:20:39.111182 master-0 kubenswrapper[29458]: I0308 22:20:39.110030 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=10.109994086 podStartE2EDuration="10.109994086s" podCreationTimestamp="2026-03-08 22:20:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:20:30.730577264 +0000 UTC m=+400.018634846" watchObservedRunningTime="2026-03-08 22:20:39.109994086 +0000 UTC m=+408.398051708" Mar 08 22:20:39.796746 master-0 kubenswrapper[29458]: I0308 22:20:39.796703 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:20:45.857235 master-0 kubenswrapper[29458]: I0308 22:20:45.855151 29458 generic.go:334] "Generic (PLEG): container finished" podID="d589bfbb-3a7d-4617-9770-5c9ef737cb4a" containerID="43a9d4a149475717fa1ef3d37fbaab396886033829072b529898dcdefcf58e78" exitCode=0 Mar 08 22:20:45.857235 master-0 kubenswrapper[29458]: I0308 22:20:45.855283 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" event={"ID":"d589bfbb-3a7d-4617-9770-5c9ef737cb4a","Type":"ContainerDied","Data":"43a9d4a149475717fa1ef3d37fbaab396886033829072b529898dcdefcf58e78"} Mar 08 22:20:46.014710 master-0 kubenswrapper[29458]: I0308 22:20:46.014640 29458 util.go:48] "No ready sandbox for pod can be found. 
Mar 08 22:20:46.051046 master-0 kubenswrapper[29458]: I0308 22:20:46.050961 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-metrics-server-audit-profiles\") pod \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") "
Mar 08 22:20:46.051046 master-0 kubenswrapper[29458]: I0308 22:20:46.051078 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-secret-metrics-client-certs\") pod \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") "
Mar 08 22:20:46.051407 master-0 kubenswrapper[29458]: I0308 22:20:46.051220 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-secret-metrics-server-tls\") pod \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") "
Mar 08 22:20:46.051407 master-0 kubenswrapper[29458]: I0308 22:20:46.051309 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-audit-log\") pod \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") "
Mar 08 22:20:46.051407 master-0 kubenswrapper[29458]: I0308 22:20:46.051343 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-client-ca-bundle\") pod \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") "
Mar 08 22:20:46.051407 master-0 kubenswrapper[29458]: I0308 22:20:46.051387 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-configmap-kubelet-serving-ca-bundle\") pod \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") "
Mar 08 22:20:46.051560 master-0 kubenswrapper[29458]: I0308 22:20:46.051482 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l82d\" (UniqueName: \"kubernetes.io/projected/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-kube-api-access-9l82d\") pod \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\" (UID: \"d589bfbb-3a7d-4617-9770-5c9ef737cb4a\") "
Mar 08 22:20:46.052124 master-0 kubenswrapper[29458]: I0308 22:20:46.052032 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-audit-log" (OuterVolumeSpecName: "audit-log") pod "d589bfbb-3a7d-4617-9770-5c9ef737cb4a" (UID: "d589bfbb-3a7d-4617-9770-5c9ef737cb4a"). InnerVolumeSpecName "audit-log".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 22:20:46.053059 master-0 kubenswrapper[29458]: I0308 22:20:46.052985 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-metrics-server-audit-profiles" (OuterVolumeSpecName: "metrics-server-audit-profiles") pod "d589bfbb-3a7d-4617-9770-5c9ef737cb4a" (UID: "d589bfbb-3a7d-4617-9770-5c9ef737cb4a"). InnerVolumeSpecName "metrics-server-audit-profiles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:20:46.053835 master-0 kubenswrapper[29458]: I0308 22:20:46.053768 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "d589bfbb-3a7d-4617-9770-5c9ef737cb4a" (UID: "d589bfbb-3a7d-4617-9770-5c9ef737cb4a"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:20:46.056837 master-0 kubenswrapper[29458]: I0308 22:20:46.056790 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "d589bfbb-3a7d-4617-9770-5c9ef737cb4a" (UID: "d589bfbb-3a7d-4617-9770-5c9ef737cb4a"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 22:20:46.057046 master-0 kubenswrapper[29458]: I0308 22:20:46.056972 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-client-ca-bundle" (OuterVolumeSpecName: "client-ca-bundle") pod "d589bfbb-3a7d-4617-9770-5c9ef737cb4a" (UID: "d589bfbb-3a7d-4617-9770-5c9ef737cb4a"). InnerVolumeSpecName "client-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 22:20:46.059081 master-0 kubenswrapper[29458]: I0308 22:20:46.059023 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-kube-api-access-9l82d" (OuterVolumeSpecName: "kube-api-access-9l82d") pod "d589bfbb-3a7d-4617-9770-5c9ef737cb4a" (UID: "d589bfbb-3a7d-4617-9770-5c9ef737cb4a"). InnerVolumeSpecName "kube-api-access-9l82d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 22:20:46.062757 master-0 kubenswrapper[29458]: I0308 22:20:46.061512 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-secret-metrics-server-tls" (OuterVolumeSpecName: "secret-metrics-server-tls") pod "d589bfbb-3a7d-4617-9770-5c9ef737cb4a" (UID: "d589bfbb-3a7d-4617-9770-5c9ef737cb4a"). InnerVolumeSpecName "secret-metrics-server-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 22:20:46.154332 master-0 kubenswrapper[29458]: I0308 22:20:46.154086 29458 reconciler_common.go:293] "Volume detached for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-metrics-server-audit-profiles\") on node \"master-0\" DevicePath \"\"" Mar 08 22:20:46.154332 master-0 kubenswrapper[29458]: I0308 22:20:46.154197 29458 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\"" Mar 08 22:20:46.154332 master-0 kubenswrapper[29458]: I0308 22:20:46.154218 29458 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-secret-metrics-server-tls\") on node \"master-0\" DevicePath \"\"" Mar 08 22:20:46.154332 master-0 kubenswrapper[29458]: I0308 22:20:46.154248 29458 reconciler_common.go:293] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-audit-log\") on node \"master-0\" DevicePath \"\"" Mar 08 22:20:46.154332 master-0 kubenswrapper[29458]: I0308 22:20:46.154270 29458 reconciler_common.go:293] "Volume detached for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-client-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 08 22:20:46.154332 master-0 kubenswrapper[29458]: I0308 22:20:46.154290 29458 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 08 22:20:46.154332 master-0 kubenswrapper[29458]: I0308 22:20:46.154310 29458 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l82d\" (UniqueName: \"kubernetes.io/projected/d589bfbb-3a7d-4617-9770-5c9ef737cb4a-kube-api-access-9l82d\") on node \"master-0\" DevicePath \"\"" Mar 08 22:20:46.873459 master-0 kubenswrapper[29458]: I0308 22:20:46.873361 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-f5876b8d7-2222x" event={"ID":"d589bfbb-3a7d-4617-9770-5c9ef737cb4a","Type":"ContainerDied","Data":"da21a3ee43c3a1cb17c48c1a6eb142ca7aa097c1d4b093b742853ab9c1146ede"} Mar 08 22:20:46.874273 master-0 kubenswrapper[29458]: I0308 22:20:46.873456 29458 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-f5876b8d7-2222x"
Mar 08 22:20:46.874273 master-0 kubenswrapper[29458]: I0308 22:20:46.873476 29458 scope.go:117] "RemoveContainer" containerID="43a9d4a149475717fa1ef3d37fbaab396886033829072b529898dcdefcf58e78"
Mar 08 22:20:46.936457 master-0 kubenswrapper[29458]: I0308 22:20:46.936340 29458 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-f5876b8d7-2222x"]
Mar 08 22:20:46.946036 master-0 kubenswrapper[29458]: I0308 22:20:46.945954 29458 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/metrics-server-f5876b8d7-2222x"]
Mar 08 22:20:46.987619 master-0 kubenswrapper[29458]: I0308 22:20:46.987487 29458 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d589bfbb-3a7d-4617-9770-5c9ef737cb4a" path="/var/lib/kubelet/pods/d589bfbb-3a7d-4617-9770-5c9ef737cb4a/volumes"
Mar 08 22:20:49.041843 master-0 kubenswrapper[29458]: I0308 22:20:49.041371 29458 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Mar 08 22:20:49.041843 master-0 kubenswrapper[29458]: I0308 22:20:49.041502 29458 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="b95a77eed40019e8cface8c31482bb18" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Mar 08 22:20:51.376108 master-0 kubenswrapper[29458]: I0308 22:20:51.376042 29458 scope.go:117] "RemoveContainer" containerID="15c38815310dffefa782d7e3b86b468eadf91008125f12d833ccabdf6a47990b"
Mar 08 22:20:51.406766 master-0 kubenswrapper[29458]: I0308 22:20:51.406728 29458 scope.go:117] "RemoveContainer" containerID="713d5bb870be4b517e2a3b6934cbc3a8dbb4fb996bc551e64dbb0c038eff7f98"
Mar 08 22:20:51.431404 master-0 kubenswrapper[29458]: I0308 22:20:51.431346 29458 scope.go:117] "RemoveContainer" containerID="24468252b1016ecbfc6fabcc842f03b85cc1d8d62ad0492983e2d43991a2cb4a"
Mar 08 22:20:59.041323 master-0 kubenswrapper[29458]: I0308 22:20:59.041208 29458 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Mar 08 22:20:59.044061 master-0 kubenswrapper[29458]: I0308 22:20:59.041329 29458 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="b95a77eed40019e8cface8c31482bb18" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Mar 08 22:20:59.044061 master-0 kubenswrapper[29458]: I0308 22:20:59.041429 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 22:20:59.044061 master-0 kubenswrapper[29458]: I0308 22:20:59.042491 29458 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"3c541f6541fc7d2a05a03f7c625ed04599dc8d171455f85e11809c2f53546303"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
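Note the spacing of the Startup failures above: 22:20:39, 22:20:49, 22:20:59, exactly 10 seconds apart, and only after the third does the kubelet log "failed startup probe, will be restarted". That is consistent with periodSeconds=10 and failureThreshold=3; plausible from the timing, though the pod's actual probe spec is not shown in this log. A sketch of the consecutive-failure bookkeeping:

package main

import "fmt"

func main() {
	const failureThreshold = 3 // assumed from the 3 failures before restart
	failures := 0
	for _, t := range []string{"22:20:39", "22:20:49", "22:20:59"} {
		failures++
		fmt.Printf("%s Probe failed (consecutive=%d)\n", t, failures)
		if failures >= failureThreshold {
			fmt.Println("Container failed startup probe, will be restarted")
		}
	}
}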
Mar 08 22:20:59.044061 master-0 kubenswrapper[29458]: I0308 22:20:59.042748 29458 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="b95a77eed40019e8cface8c31482bb18" containerName="kube-controller-manager" containerID="cri-o://3c541f6541fc7d2a05a03f7c625ed04599dc8d171455f85e11809c2f53546303" gracePeriod=30
Mar 08 22:21:26.934782 master-0 kubenswrapper[29458]: I0308 22:21:26.925489 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zpmb4"]
Mar 08 22:21:26.934782 master-0 kubenswrapper[29458]: E0308 22:21:26.925934 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c03808c-6ee1-4d20-8fc2-ef0100b43ee9" containerName="installer"
Mar 08 22:21:26.934782 master-0 kubenswrapper[29458]: I0308 22:21:26.925949 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c03808c-6ee1-4d20-8fc2-ef0100b43ee9" containerName="installer"
Mar 08 22:21:26.934782 master-0 kubenswrapper[29458]: E0308 22:21:26.925997 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d589bfbb-3a7d-4617-9770-5c9ef737cb4a" containerName="metrics-server"
Mar 08 22:21:26.934782 master-0 kubenswrapper[29458]: I0308 22:21:26.926004 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="d589bfbb-3a7d-4617-9770-5c9ef737cb4a" containerName="metrics-server"
Mar 08 22:21:26.934782 master-0 kubenswrapper[29458]: I0308 22:21:26.926167 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c03808c-6ee1-4d20-8fc2-ef0100b43ee9" containerName="installer"
Mar 08 22:21:26.934782 master-0 kubenswrapper[29458]: I0308 22:21:26.926227 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="d589bfbb-3a7d-4617-9770-5c9ef737cb4a" containerName="metrics-server"
Mar 08 22:21:26.934782 master-0 kubenswrapper[29458]: I0308 22:21:26.927283 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zpmb4"
Mar 08 22:21:26.934782 master-0 kubenswrapper[29458]: I0308 22:21:26.932591 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-26ndz"
Mar 08 22:21:26.936387 master-0 kubenswrapper[29458]: I0308 22:21:26.936148 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zpmb4"]
Mar 08 22:21:26.994596 master-0 kubenswrapper[29458]: I0308 22:21:26.994520 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5d5f8cb68d-7n2g4"]
Mar 08 22:21:26.998798 master-0 kubenswrapper[29458]: I0308 22:21:26.996189 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d5f8cb68d-7n2g4"
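The RemoveStaleState entries above are housekeeping triggered by new pod admission: the CPU and memory managers keep per-container resource assignments in a state file, and entries belonging to pods that no longer exist (the installer and metrics-server pods torn down earlier) are purged so their resources become allocatable again. A simplified sketch of that pass, with truncated pod UIDs used as map keys for brevity (not the kubelet's actual state format):

package main

import "fmt"

// removeStaleState drops assignments whose owning pod is gone, mirroring
// the cpu_manager/state_mem log pair above.
func removeStaleState(assignments map[string][]int, livePods map[string]bool) {
	for key, cpus := range assignments {
		if !livePods[key] {
			fmt.Printf("RemoveStaleState: removing container %s (cpus %v)\n", key, cpus)
			delete(assignments, key) // "Deleted CPUSet assignment"
		}
	}
}

func main() {
	state := map[string][]int{
		"8c03808c/installer":      {2, 3}, // hypothetical CPU assignments
		"d589bfbb/metrics-server": {4},
	}
	removeStaleState(state, map[string]bool{}) // both pods are gone
	fmt.Println("remaining assignments:", len(state))
}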
Mar 08 22:21:27.004204 master-0 kubenswrapper[29458]: I0308 22:21:27.003521 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a38b4833-5b1e-4127-a31c-43d1b154b9f5-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zpmb4\" (UID: \"a38b4833-5b1e-4127-a31c-43d1b154b9f5\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zpmb4"
Mar 08 22:21:27.004204 master-0 kubenswrapper[29458]: I0308 22:21:27.003583 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn2r2\" (UniqueName: \"kubernetes.io/projected/a38b4833-5b1e-4127-a31c-43d1b154b9f5-kube-api-access-tn2r2\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zpmb4\" (UID: \"a38b4833-5b1e-4127-a31c-43d1b154b9f5\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zpmb4"
Mar 08 22:21:27.004204 master-0 kubenswrapper[29458]: I0308 22:21:27.003641 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a38b4833-5b1e-4127-a31c-43d1b154b9f5-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zpmb4\" (UID: \"a38b4833-5b1e-4127-a31c-43d1b154b9f5\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zpmb4"
Mar 08 22:21:27.023910 master-0 kubenswrapper[29458]: I0308 22:21:27.022837 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d5f8cb68d-7n2g4"]
Mar 08 22:21:27.105579 master-0 kubenswrapper[29458]: I0308 22:21:27.105486 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/81607a56-08b4-4113-94bb-d6065b7809d5-oauth-serving-cert\") pod \"console-5d5f8cb68d-7n2g4\" (UID: \"81607a56-08b4-4113-94bb-d6065b7809d5\") " pod="openshift-console/console-5d5f8cb68d-7n2g4"
Mar 08 22:21:27.105579 master-0 kubenswrapper[29458]: I0308 22:21:27.105574 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/81607a56-08b4-4113-94bb-d6065b7809d5-console-oauth-config\") pod \"console-5d5f8cb68d-7n2g4\" (UID: \"81607a56-08b4-4113-94bb-d6065b7809d5\") " pod="openshift-console/console-5d5f8cb68d-7n2g4"
Mar 08 22:21:27.105912 master-0 kubenswrapper[29458]: I0308 22:21:27.105615 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/81607a56-08b4-4113-94bb-d6065b7809d5-service-ca\") pod \"console-5d5f8cb68d-7n2g4\" (UID: \"81607a56-08b4-4113-94bb-d6065b7809d5\") " pod="openshift-console/console-5d5f8cb68d-7n2g4"
Mar 08 22:21:27.105912 master-0 kubenswrapper[29458]: I0308 22:21:27.105874 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/81607a56-08b4-4113-94bb-d6065b7809d5-console-serving-cert\") pod \"console-5d5f8cb68d-7n2g4\" (UID: \"81607a56-08b4-4113-94bb-d6065b7809d5\") " pod="openshift-console/console-5d5f8cb68d-7n2g4"
Mar 08 22:21:27.106000 master-0 kubenswrapper[29458]: I0308 22:21:27.105929 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a38b4833-5b1e-4127-a31c-43d1b154b9f5-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zpmb4\" (UID: \"a38b4833-5b1e-4127-a31c-43d1b154b9f5\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zpmb4"
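The volume manager handles each volume in two phases, both visible above: VerifyControllerAttachedVolume (confirm the volume is attached to this node, which is trivially true for configmap, secret, emptyDir, and projected volumes, since they have no external device) and then MountVolume.SetUp (materialize the contents into the pod's volumes directory). A rough sketch of that ordering, illustrative only:

package main

import "fmt"

type vol struct{ name, plugin string }

// mountAll walks the two phases in order; the pod sandbox is only
// started once every volume has been set up.
func mountAll(pod string, vols []vol) {
	for _, v := range vols {
		fmt.Printf("VerifyControllerAttachedVolume started for %q\n", v.name)
	}
	for _, v := range vols {
		fmt.Printf("MountVolume started for %q (%s)\n", v.name, v.plugin)
		fmt.Printf("MountVolume.SetUp succeeded for %q\n", v.name)
	}
	fmt.Println("all volumes ready; sandbox for", pod, "can be started")
}

func main() {
	mountAll("openshift-console/console-5d5f8cb68d-7n2g4", []vol{
		{"console-config", "kubernetes.io/configmap"},
		{"console-serving-cert", "kubernetes.io/secret"},
		{"kube-api-access-j2msk", "kubernetes.io/projected"},
	})
}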
Mar 08 22:21:27.106000 master-0 kubenswrapper[29458]: I0308 22:21:27.105969 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn2r2\" (UniqueName: \"kubernetes.io/projected/a38b4833-5b1e-4127-a31c-43d1b154b9f5-kube-api-access-tn2r2\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zpmb4\" (UID: \"a38b4833-5b1e-4127-a31c-43d1b154b9f5\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zpmb4"
Mar 08 22:21:27.106106 master-0 kubenswrapper[29458]: I0308 22:21:27.106008 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/81607a56-08b4-4113-94bb-d6065b7809d5-console-config\") pod \"console-5d5f8cb68d-7n2g4\" (UID: \"81607a56-08b4-4113-94bb-d6065b7809d5\") " pod="openshift-console/console-5d5f8cb68d-7n2g4"
Mar 08 22:21:27.106106 master-0 kubenswrapper[29458]: I0308 22:21:27.106098 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81607a56-08b4-4113-94bb-d6065b7809d5-trusted-ca-bundle\") pod \"console-5d5f8cb68d-7n2g4\" (UID: \"81607a56-08b4-4113-94bb-d6065b7809d5\") " pod="openshift-console/console-5d5f8cb68d-7n2g4"
Mar 08 22:21:27.106173 master-0 kubenswrapper[29458]: I0308 22:21:27.106154 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a38b4833-5b1e-4127-a31c-43d1b154b9f5-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zpmb4\" (UID: \"a38b4833-5b1e-4127-a31c-43d1b154b9f5\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zpmb4"
Mar 08 22:21:27.106208 master-0 kubenswrapper[29458]: I0308 22:21:27.106185 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2msk\" (UniqueName: \"kubernetes.io/projected/81607a56-08b4-4113-94bb-d6065b7809d5-kube-api-access-j2msk\") pod \"console-5d5f8cb68d-7n2g4\" (UID: \"81607a56-08b4-4113-94bb-d6065b7809d5\") " pod="openshift-console/console-5d5f8cb68d-7n2g4"
Mar 08 22:21:27.106802 master-0 kubenswrapper[29458]: I0308 22:21:27.106778 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a38b4833-5b1e-4127-a31c-43d1b154b9f5-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zpmb4\" (UID: \"a38b4833-5b1e-4127-a31c-43d1b154b9f5\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zpmb4"
Mar 08 22:21:27.108220 master-0 kubenswrapper[29458]: I0308 22:21:27.106888 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a38b4833-5b1e-4127-a31c-43d1b154b9f5-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zpmb4\" (UID: \"a38b4833-5b1e-4127-a31c-43d1b154b9f5\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zpmb4"
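The hash-named openshift-marketplace pod above ("7f6062...zpmb4") looks like an OLM bundle-unpack job: short-lived containers (named "pull", "extract", and "util" in the RemoveStaleState entries later in this log) that share the two emptyDir volumes "bundle" and "util" and each exit 0 once the bundle image content is written out. A sketch of the shared-scratch-directory pattern those containers rely on, with hypothetical paths and contents:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	scratch, err := os.MkdirTemp("", "bundle") // stands in for the emptyDir
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(scratch)

	// "pull"/"extract" step: write unpacked manifests into the volume.
	manifest := filepath.Join(scratch, "manifests.yaml")
	if err := os.WriteFile(manifest, []byte("kind: ClusterServiceVersion\n"), 0o644); err != nil {
		panic(err)
	}

	// "util" step: a second process reads from the same volume.
	data, err := os.ReadFile(manifest)
	if err != nil {
		panic(err)
	}
	fmt.Printf("extracted %d bytes; job containers exit 0\n", len(data))
}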
Mar 08 22:21:27.122852 master-0 kubenswrapper[29458]: I0308 22:21:27.122779 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn2r2\" (UniqueName: \"kubernetes.io/projected/a38b4833-5b1e-4127-a31c-43d1b154b9f5-kube-api-access-tn2r2\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zpmb4\" (UID: \"a38b4833-5b1e-4127-a31c-43d1b154b9f5\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zpmb4" Mar 08 22:21:27.207942 master-0 kubenswrapper[29458]: I0308 22:21:27.207762 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/81607a56-08b4-4113-94bb-d6065b7809d5-console-serving-cert\") pod \"console-5d5f8cb68d-7n2g4\" (UID: \"81607a56-08b4-4113-94bb-d6065b7809d5\") " pod="openshift-console/console-5d5f8cb68d-7n2g4" Mar 08 22:21:27.207942 master-0 kubenswrapper[29458]: I0308 22:21:27.207881 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/81607a56-08b4-4113-94bb-d6065b7809d5-console-config\") pod \"console-5d5f8cb68d-7n2g4\" (UID: \"81607a56-08b4-4113-94bb-d6065b7809d5\") " pod="openshift-console/console-5d5f8cb68d-7n2g4" Mar 08 22:21:27.208299 master-0 kubenswrapper[29458]: I0308 22:21:27.207951 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81607a56-08b4-4113-94bb-d6065b7809d5-trusted-ca-bundle\") pod \"console-5d5f8cb68d-7n2g4\" (UID: \"81607a56-08b4-4113-94bb-d6065b7809d5\") " pod="openshift-console/console-5d5f8cb68d-7n2g4" Mar 08 22:21:27.208299 master-0 kubenswrapper[29458]: I0308 22:21:27.208019 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2msk\" (UniqueName: \"kubernetes.io/projected/81607a56-08b4-4113-94bb-d6065b7809d5-kube-api-access-j2msk\") pod \"console-5d5f8cb68d-7n2g4\" (UID: \"81607a56-08b4-4113-94bb-d6065b7809d5\") " pod="openshift-console/console-5d5f8cb68d-7n2g4" Mar 08 22:21:27.208299 master-0 kubenswrapper[29458]: I0308 22:21:27.208142 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/81607a56-08b4-4113-94bb-d6065b7809d5-oauth-serving-cert\") pod \"console-5d5f8cb68d-7n2g4\" (UID: \"81607a56-08b4-4113-94bb-d6065b7809d5\") " pod="openshift-console/console-5d5f8cb68d-7n2g4" Mar 08 22:21:27.208299 master-0 kubenswrapper[29458]: I0308 22:21:27.208192 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/81607a56-08b4-4113-94bb-d6065b7809d5-console-oauth-config\") pod \"console-5d5f8cb68d-7n2g4\" (UID: \"81607a56-08b4-4113-94bb-d6065b7809d5\") " pod="openshift-console/console-5d5f8cb68d-7n2g4" Mar 08 22:21:27.208299 master-0 kubenswrapper[29458]: I0308 22:21:27.208245 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/81607a56-08b4-4113-94bb-d6065b7809d5-service-ca\") pod \"console-5d5f8cb68d-7n2g4\" (UID: \"81607a56-08b4-4113-94bb-d6065b7809d5\") " pod="openshift-console/console-5d5f8cb68d-7n2g4" Mar 08 22:21:27.209296 master-0 kubenswrapper[29458]: I0308 22:21:27.209233 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/81607a56-08b4-4113-94bb-d6065b7809d5-console-config\") pod \"console-5d5f8cb68d-7n2g4\" (UID: \"81607a56-08b4-4113-94bb-d6065b7809d5\") " pod="openshift-console/console-5d5f8cb68d-7n2g4" Mar 08 22:21:27.210151 master-0 kubenswrapper[29458]: I0308 22:21:27.210114 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/81607a56-08b4-4113-94bb-d6065b7809d5-oauth-serving-cert\") pod \"console-5d5f8cb68d-7n2g4\" (UID: \"81607a56-08b4-4113-94bb-d6065b7809d5\") " pod="openshift-console/console-5d5f8cb68d-7n2g4" Mar 08 22:21:27.210458 master-0 kubenswrapper[29458]: I0308 22:21:27.210418 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/81607a56-08b4-4113-94bb-d6065b7809d5-service-ca\") pod \"console-5d5f8cb68d-7n2g4\" (UID: \"81607a56-08b4-4113-94bb-d6065b7809d5\") " pod="openshift-console/console-5d5f8cb68d-7n2g4" Mar 08 22:21:27.212315 master-0 kubenswrapper[29458]: I0308 22:21:27.212238 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81607a56-08b4-4113-94bb-d6065b7809d5-trusted-ca-bundle\") pod \"console-5d5f8cb68d-7n2g4\" (UID: \"81607a56-08b4-4113-94bb-d6065b7809d5\") " pod="openshift-console/console-5d5f8cb68d-7n2g4" Mar 08 22:21:27.213451 master-0 kubenswrapper[29458]: I0308 22:21:27.213008 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/81607a56-08b4-4113-94bb-d6065b7809d5-console-serving-cert\") pod \"console-5d5f8cb68d-7n2g4\" (UID: \"81607a56-08b4-4113-94bb-d6065b7809d5\") " pod="openshift-console/console-5d5f8cb68d-7n2g4" Mar 08 22:21:27.213580 master-0 kubenswrapper[29458]: I0308 22:21:27.213538 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/81607a56-08b4-4113-94bb-d6065b7809d5-console-oauth-config\") pod \"console-5d5f8cb68d-7n2g4\" (UID: \"81607a56-08b4-4113-94bb-d6065b7809d5\") " pod="openshift-console/console-5d5f8cb68d-7n2g4" Mar 08 22:21:27.237526 master-0 kubenswrapper[29458]: I0308 22:21:27.237470 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2msk\" (UniqueName: \"kubernetes.io/projected/81607a56-08b4-4113-94bb-d6065b7809d5-kube-api-access-j2msk\") pod \"console-5d5f8cb68d-7n2g4\" (UID: \"81607a56-08b4-4113-94bb-d6065b7809d5\") " pod="openshift-console/console-5d5f8cb68d-7n2g4" Mar 08 22:21:27.260436 master-0 kubenswrapper[29458]: I0308 22:21:27.260340 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zpmb4" Mar 08 22:21:27.365940 master-0 kubenswrapper[29458]: I0308 22:21:27.365849 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5d5f8cb68d-7n2g4"
Mar 08 22:21:27.829886 master-0 kubenswrapper[29458]: I0308 22:21:27.829784 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zpmb4"]
Mar 08 22:21:27.831118 master-0 kubenswrapper[29458]: W0308 22:21:27.830160 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda38b4833_5b1e_4127_a31c_43d1b154b9f5.slice/crio-e60c8f171ec7f7d8d2bbb8c282a7fa17da5417850afa0e1aff7942666c416bd0 WatchSource:0}: Error finding container e60c8f171ec7f7d8d2bbb8c282a7fa17da5417850afa0e1aff7942666c416bd0: Status 404 returned error can't find the container with id e60c8f171ec7f7d8d2bbb8c282a7fa17da5417850afa0e1aff7942666c416bd0
Mar 08 22:21:27.925282 master-0 kubenswrapper[29458]: I0308 22:21:27.925220 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d5f8cb68d-7n2g4"]
Mar 08 22:21:28.344907 master-0 kubenswrapper[29458]: I0308 22:21:28.344809 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d5f8cb68d-7n2g4" event={"ID":"81607a56-08b4-4113-94bb-d6065b7809d5","Type":"ContainerStarted","Data":"ba058a49db2cc8fa08b4f5d3c89f5bc1b63aab7171686ec8cdd490108fb2a5ea"}
Mar 08 22:21:28.344907 master-0 kubenswrapper[29458]: I0308 22:21:28.344902 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d5f8cb68d-7n2g4" event={"ID":"81607a56-08b4-4113-94bb-d6065b7809d5","Type":"ContainerStarted","Data":"77b4b754119442b06ec45e32c1dde13e65debe6d780de51e17716ebf552e1b5e"}
Mar 08 22:21:28.348696 master-0 kubenswrapper[29458]: I0308 22:21:28.348624 29458 generic.go:334] "Generic (PLEG): container finished" podID="a38b4833-5b1e-4127-a31c-43d1b154b9f5" containerID="1081aa614f78582b4c1f067674d1aa5899663050a35aad411462c71c4eada9fd" exitCode=0
Mar 08 22:21:28.348800 master-0 kubenswrapper[29458]: I0308 22:21:28.348707 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zpmb4" event={"ID":"a38b4833-5b1e-4127-a31c-43d1b154b9f5","Type":"ContainerDied","Data":"1081aa614f78582b4c1f067674d1aa5899663050a35aad411462c71c4eada9fd"}
Mar 08 22:21:28.348800 master-0 kubenswrapper[29458]: I0308 22:21:28.348750 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zpmb4" event={"ID":"a38b4833-5b1e-4127-a31c-43d1b154b9f5","Type":"ContainerStarted","Data":"e60c8f171ec7f7d8d2bbb8c282a7fa17da5417850afa0e1aff7942666c416bd0"}
Mar 08 22:21:28.350141 master-0 kubenswrapper[29458]: I0308 22:21:28.350032 29458 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 08 22:21:28.367178 master-0 kubenswrapper[29458]: I0308 22:21:28.367048 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5d5f8cb68d-7n2g4" podStartSLOduration=2.3667224190000002 podStartE2EDuration="2.366722419s" podCreationTimestamp="2026-03-08 22:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:21:28.364724556 +0000 UTC m=+457.652782148" watchObservedRunningTime="2026-03-08 22:21:28.366722419 +0000 UTC m=+457.654780011"
Mar 08 22:21:29.366692 master-0 kubenswrapper[29458]: I0308 22:21:29.366597 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_b95a77eed40019e8cface8c31482bb18/kube-controller-manager/0.log"
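The "Failed to process watch event ... Status 404" warning above appears to be a benign race commonly seen in kubelet logs: cAdvisor notices the new crio-<id> cgroup via its filesystem watch before the container is queryable, the lookup 404s, and the event is dropped; the container is then picked up on a later housekeeping pass (the very next PLEG events show it running). A sketch of tolerating that race, simplified and not cAdvisor's actual code:

package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("can't find the container with id")

func lookup(id string, known map[string]bool) error {
	if !known[id] {
		return fmt.Errorf("status 404: %w %s", errNotFound, id)
	}
	return nil
}

// onWatchEvent logs and drops events that race ahead of the runtime;
// a periodic retry eventually tracks the container.
func onWatchEvent(id string, known map[string]bool) {
	if err := lookup(id, known); err != nil {
		fmt.Println("W Failed to process watch event:", err)
		return
	}
	fmt.Println("container tracked:", id)
}

func main() {
	known := map[string]bool{}
	onWatchEvent("e60c8f171ec7", known) // watch fires early: 404, dropped
	known["e60c8f171ec7"] = true
	onWatchEvent("e60c8f171ec7", known) // retried later: tracked
}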
Mar 08 22:21:29.367799 master-0 kubenswrapper[29458]: I0308 22:21:29.366703 29458 generic.go:334] "Generic (PLEG): container finished" podID="b95a77eed40019e8cface8c31482bb18" containerID="3c541f6541fc7d2a05a03f7c625ed04599dc8d171455f85e11809c2f53546303" exitCode=137
Mar 08 22:21:29.367799 master-0 kubenswrapper[29458]: I0308 22:21:29.366882 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"b95a77eed40019e8cface8c31482bb18","Type":"ContainerDied","Data":"3c541f6541fc7d2a05a03f7c625ed04599dc8d171455f85e11809c2f53546303"}
Mar 08 22:21:30.390313 master-0 kubenswrapper[29458]: I0308 22:21:30.390202 29458 generic.go:334] "Generic (PLEG): container finished" podID="a38b4833-5b1e-4127-a31c-43d1b154b9f5" containerID="07e8bece10b46d426fdb8e4c40c34f7f7683989e00d00de8da0572bd76c11601" exitCode=0
Mar 08 22:21:30.391338 master-0 kubenswrapper[29458]: I0308 22:21:30.390343 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zpmb4" event={"ID":"a38b4833-5b1e-4127-a31c-43d1b154b9f5","Type":"ContainerDied","Data":"07e8bece10b46d426fdb8e4c40c34f7f7683989e00d00de8da0572bd76c11601"}
Mar 08 22:21:30.402499 master-0 kubenswrapper[29458]: I0308 22:21:30.402421 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_b95a77eed40019e8cface8c31482bb18/kube-controller-manager/0.log"
Mar 08 22:21:30.402720 master-0 kubenswrapper[29458]: I0308 22:21:30.402530 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"b95a77eed40019e8cface8c31482bb18","Type":"ContainerStarted","Data":"3a5d0e8e74ed5d8023dae6e75dff902c4fc34119284f5c6867998d086c63d0bb"}
Mar 08 22:21:31.424660 master-0 kubenswrapper[29458]: I0308 22:21:31.424564 29458 generic.go:334] "Generic (PLEG): container finished" podID="a38b4833-5b1e-4127-a31c-43d1b154b9f5" containerID="6fe22014b2207ae74a61d0e350e7610c21fb102e4226b228507ffdce92ea35a9" exitCode=0
Mar 08 22:21:31.425662 master-0 kubenswrapper[29458]: I0308 22:21:31.425236 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zpmb4" event={"ID":"a38b4833-5b1e-4127-a31c-43d1b154b9f5","Type":"ContainerDied","Data":"6fe22014b2207ae74a61d0e350e7610c21fb102e4226b228507ffdce92ea35a9"}
Mar 08 22:21:32.917699 master-0 kubenswrapper[29458]: I0308 22:21:32.917580 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zpmb4"
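exitCode=137 above decodes as 128 + 9, i.e. the container died to SIGKILL. That lines up with the timeline: the kill with gracePeriod=30 was issued at 22:20:59.042, and ContainerDied is observed at 22:21:29.366, the full 30 seconds later, so kube-controller-manager did not exit on SIGTERM and was force-killed when the grace period expired. The decoding:

package main

import (
	"fmt"
	"syscall"
)

func main() {
	exitCode := 137
	// Shells and runtimes report death-by-signal as 128 + signal number.
	if exitCode > 128 {
		sig := syscall.Signal(exitCode - 128)
		fmt.Printf("exit code %d => killed by signal %d (%s)\n", exitCode, int(sig), sig)
	}
}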
Mar 08 22:21:33.035593 master-0 kubenswrapper[29458]: I0308 22:21:33.035514 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a38b4833-5b1e-4127-a31c-43d1b154b9f5-util\") pod \"a38b4833-5b1e-4127-a31c-43d1b154b9f5\" (UID: \"a38b4833-5b1e-4127-a31c-43d1b154b9f5\") "
Mar 08 22:21:33.036123 master-0 kubenswrapper[29458]: I0308 22:21:33.036104 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a38b4833-5b1e-4127-a31c-43d1b154b9f5-bundle\") pod \"a38b4833-5b1e-4127-a31c-43d1b154b9f5\" (UID: \"a38b4833-5b1e-4127-a31c-43d1b154b9f5\") "
Mar 08 22:21:33.036361 master-0 kubenswrapper[29458]: I0308 22:21:33.036342 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tn2r2\" (UniqueName: \"kubernetes.io/projected/a38b4833-5b1e-4127-a31c-43d1b154b9f5-kube-api-access-tn2r2\") pod \"a38b4833-5b1e-4127-a31c-43d1b154b9f5\" (UID: \"a38b4833-5b1e-4127-a31c-43d1b154b9f5\") "
Mar 08 22:21:33.037625 master-0 kubenswrapper[29458]: I0308 22:21:33.037396 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a38b4833-5b1e-4127-a31c-43d1b154b9f5-bundle" (OuterVolumeSpecName: "bundle") pod "a38b4833-5b1e-4127-a31c-43d1b154b9f5" (UID: "a38b4833-5b1e-4127-a31c-43d1b154b9f5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 08 22:21:33.042005 master-0 kubenswrapper[29458]: I0308 22:21:33.041945 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a38b4833-5b1e-4127-a31c-43d1b154b9f5-kube-api-access-tn2r2" (OuterVolumeSpecName: "kube-api-access-tn2r2") pod "a38b4833-5b1e-4127-a31c-43d1b154b9f5" (UID: "a38b4833-5b1e-4127-a31c-43d1b154b9f5"). InnerVolumeSpecName "kube-api-access-tn2r2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 22:21:33.057775 master-0 kubenswrapper[29458]: I0308 22:21:33.057448 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a38b4833-5b1e-4127-a31c-43d1b154b9f5-util" (OuterVolumeSpecName: "util") pod "a38b4833-5b1e-4127-a31c-43d1b154b9f5" (UID: "a38b4833-5b1e-4127-a31c-43d1b154b9f5"). InnerVolumeSpecName "util".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 22:21:33.139220 master-0 kubenswrapper[29458]: I0308 22:21:33.138962 29458 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tn2r2\" (UniqueName: \"kubernetes.io/projected/a38b4833-5b1e-4127-a31c-43d1b154b9f5-kube-api-access-tn2r2\") on node \"master-0\" DevicePath \"\"" Mar 08 22:21:33.139220 master-0 kubenswrapper[29458]: I0308 22:21:33.139024 29458 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a38b4833-5b1e-4127-a31c-43d1b154b9f5-util\") on node \"master-0\" DevicePath \"\"" Mar 08 22:21:33.139220 master-0 kubenswrapper[29458]: I0308 22:21:33.139034 29458 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a38b4833-5b1e-4127-a31c-43d1b154b9f5-bundle\") on node \"master-0\" DevicePath \"\"" Mar 08 22:21:33.449000 master-0 kubenswrapper[29458]: I0308 22:21:33.448825 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zpmb4" event={"ID":"a38b4833-5b1e-4127-a31c-43d1b154b9f5","Type":"ContainerDied","Data":"e60c8f171ec7f7d8d2bbb8c282a7fa17da5417850afa0e1aff7942666c416bd0"} Mar 08 22:21:33.449000 master-0 kubenswrapper[29458]: I0308 22:21:33.448904 29458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e60c8f171ec7f7d8d2bbb8c282a7fa17da5417850afa0e1aff7942666c416bd0" Mar 08 22:21:33.449000 master-0 kubenswrapper[29458]: I0308 22:21:33.448910 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4zpmb4" Mar 08 22:21:37.366506 master-0 kubenswrapper[29458]: I0308 22:21:37.366380 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d5f8cb68d-7n2g4" Mar 08 22:21:37.366506 master-0 kubenswrapper[29458]: I0308 22:21:37.366481 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5d5f8cb68d-7n2g4" Mar 08 22:21:37.372016 master-0 kubenswrapper[29458]: I0308 22:21:37.371953 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5d5f8cb68d-7n2g4" Mar 08 22:21:37.511968 master-0 kubenswrapper[29458]: I0308 22:21:37.511890 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5d5f8cb68d-7n2g4" Mar 08 22:21:39.040532 master-0 kubenswrapper[29458]: I0308 22:21:39.040446 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:21:39.044405 master-0 kubenswrapper[29458]: I0308 22:21:39.042129 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:21:39.048579 master-0 kubenswrapper[29458]: I0308 22:21:39.048530 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:21:39.533612 master-0 kubenswrapper[29458]: I0308 22:21:39.533516 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 22:21:48.767126 master-0 kubenswrapper[29458]: I0308 22:21:48.766085 29458 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-console/console-6994646879-wvkdk"] Mar 08 22:21:51.500056 master-0 kubenswrapper[29458]: I0308 22:21:51.499958 29458 scope.go:117] "RemoveContainer" containerID="680bc626daa2c5987ce239ac78852fa737cd8249340056e2004f1c4baeff289f" Mar 08 22:21:51.534699 master-0 kubenswrapper[29458]: I0308 22:21:51.534622 29458 scope.go:117] "RemoveContainer" containerID="f3f780418e0dc78b1593ce2cd94d46df24ecbd7393affbd8ab7521d75f83183d" Mar 08 22:21:56.761101 master-0 kubenswrapper[29458]: I0308 22:21:56.760761 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/lvms-operator-677ff5c948-t4zdm"] Mar 08 22:21:56.761828 master-0 kubenswrapper[29458]: E0308 22:21:56.761158 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a38b4833-5b1e-4127-a31c-43d1b154b9f5" containerName="util" Mar 08 22:21:56.761828 master-0 kubenswrapper[29458]: I0308 22:21:56.761171 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="a38b4833-5b1e-4127-a31c-43d1b154b9f5" containerName="util" Mar 08 22:21:56.761828 master-0 kubenswrapper[29458]: E0308 22:21:56.761189 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a38b4833-5b1e-4127-a31c-43d1b154b9f5" containerName="extract" Mar 08 22:21:56.761828 master-0 kubenswrapper[29458]: I0308 22:21:56.761196 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="a38b4833-5b1e-4127-a31c-43d1b154b9f5" containerName="extract" Mar 08 22:21:56.761828 master-0 kubenswrapper[29458]: E0308 22:21:56.761209 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a38b4833-5b1e-4127-a31c-43d1b154b9f5" containerName="pull" Mar 08 22:21:56.761828 master-0 kubenswrapper[29458]: I0308 22:21:56.761215 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="a38b4833-5b1e-4127-a31c-43d1b154b9f5" containerName="pull" Mar 08 22:21:56.761828 master-0 kubenswrapper[29458]: I0308 22:21:56.761351 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="a38b4833-5b1e-4127-a31c-43d1b154b9f5" containerName="extract" Mar 08 22:21:56.762044 master-0 kubenswrapper[29458]: I0308 22:21:56.761931 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/lvms-operator-677ff5c948-t4zdm" Mar 08 22:21:56.765127 master-0 kubenswrapper[29458]: I0308 22:21:56.764287 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-webhook-server-cert" Mar 08 22:21:56.765127 master-0 kubenswrapper[29458]: I0308 22:21:56.764659 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt" Mar 08 22:21:56.765127 master-0 kubenswrapper[29458]: I0308 22:21:56.764777 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt" Mar 08 22:21:56.765127 master-0 kubenswrapper[29458]: I0308 22:21:56.764903 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-metrics-cert" Mar 08 22:21:56.765127 master-0 kubenswrapper[29458]: I0308 22:21:56.764988 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-service-cert" Mar 08 22:21:56.784648 master-0 kubenswrapper[29458]: I0308 22:21:56.784566 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-677ff5c948-t4zdm"] Mar 08 22:21:56.947726 master-0 kubenswrapper[29458]: I0308 22:21:56.947662 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5p22\" (UniqueName: \"kubernetes.io/projected/66bd6174-2bcf-4dfe-9379-624565c6d1d9-kube-api-access-c5p22\") pod \"lvms-operator-677ff5c948-t4zdm\" (UID: \"66bd6174-2bcf-4dfe-9379-624565c6d1d9\") " pod="openshift-storage/lvms-operator-677ff5c948-t4zdm" Mar 08 22:21:56.948460 master-0 kubenswrapper[29458]: I0308 22:21:56.948390 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/66bd6174-2bcf-4dfe-9379-624565c6d1d9-metrics-cert\") pod \"lvms-operator-677ff5c948-t4zdm\" (UID: \"66bd6174-2bcf-4dfe-9379-624565c6d1d9\") " pod="openshift-storage/lvms-operator-677ff5c948-t4zdm" Mar 08 22:21:56.948539 master-0 kubenswrapper[29458]: I0308 22:21:56.948500 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/66bd6174-2bcf-4dfe-9379-624565c6d1d9-apiservice-cert\") pod \"lvms-operator-677ff5c948-t4zdm\" (UID: \"66bd6174-2bcf-4dfe-9379-624565c6d1d9\") " pod="openshift-storage/lvms-operator-677ff5c948-t4zdm" Mar 08 22:21:56.948539 master-0 kubenswrapper[29458]: I0308 22:21:56.948526 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/66bd6174-2bcf-4dfe-9379-624565c6d1d9-webhook-cert\") pod \"lvms-operator-677ff5c948-t4zdm\" (UID: \"66bd6174-2bcf-4dfe-9379-624565c6d1d9\") " pod="openshift-storage/lvms-operator-677ff5c948-t4zdm" Mar 08 22:21:56.948622 master-0 kubenswrapper[29458]: I0308 22:21:56.948567 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/66bd6174-2bcf-4dfe-9379-624565c6d1d9-socket-dir\") pod \"lvms-operator-677ff5c948-t4zdm\" (UID: \"66bd6174-2bcf-4dfe-9379-624565c6d1d9\") " pod="openshift-storage/lvms-operator-677ff5c948-t4zdm" Mar 08 22:21:57.049956 master-0 kubenswrapper[29458]: I0308 22:21:57.049794 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/66bd6174-2bcf-4dfe-9379-624565c6d1d9-socket-dir\") pod \"lvms-operator-677ff5c948-t4zdm\" (UID: \"66bd6174-2bcf-4dfe-9379-624565c6d1d9\") " pod="openshift-storage/lvms-operator-677ff5c948-t4zdm" Mar 08 22:21:57.049956 master-0 kubenswrapper[29458]: I0308 22:21:57.049877 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5p22\" (UniqueName: \"kubernetes.io/projected/66bd6174-2bcf-4dfe-9379-624565c6d1d9-kube-api-access-c5p22\") pod \"lvms-operator-677ff5c948-t4zdm\" (UID: \"66bd6174-2bcf-4dfe-9379-624565c6d1d9\") " pod="openshift-storage/lvms-operator-677ff5c948-t4zdm" Mar 08 22:21:57.049956 master-0 kubenswrapper[29458]: I0308 22:21:57.049949 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/66bd6174-2bcf-4dfe-9379-624565c6d1d9-metrics-cert\") pod \"lvms-operator-677ff5c948-t4zdm\" (UID: \"66bd6174-2bcf-4dfe-9379-624565c6d1d9\") " pod="openshift-storage/lvms-operator-677ff5c948-t4zdm" Mar 08 22:21:57.050340 master-0 kubenswrapper[29458]: I0308 22:21:57.049980 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/66bd6174-2bcf-4dfe-9379-624565c6d1d9-apiservice-cert\") pod \"lvms-operator-677ff5c948-t4zdm\" (UID: \"66bd6174-2bcf-4dfe-9379-624565c6d1d9\") " pod="openshift-storage/lvms-operator-677ff5c948-t4zdm" Mar 08 22:21:57.050340 master-0 kubenswrapper[29458]: I0308 22:21:57.049997 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/66bd6174-2bcf-4dfe-9379-624565c6d1d9-webhook-cert\") pod \"lvms-operator-677ff5c948-t4zdm\" (UID: \"66bd6174-2bcf-4dfe-9379-624565c6d1d9\") " pod="openshift-storage/lvms-operator-677ff5c948-t4zdm" Mar 08 22:21:57.050440 master-0 kubenswrapper[29458]: I0308 22:21:57.050368 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/66bd6174-2bcf-4dfe-9379-624565c6d1d9-socket-dir\") pod \"lvms-operator-677ff5c948-t4zdm\" (UID: \"66bd6174-2bcf-4dfe-9379-624565c6d1d9\") " pod="openshift-storage/lvms-operator-677ff5c948-t4zdm" Mar 08 22:21:57.053757 master-0 kubenswrapper[29458]: I0308 22:21:57.053716 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/66bd6174-2bcf-4dfe-9379-624565c6d1d9-webhook-cert\") pod \"lvms-operator-677ff5c948-t4zdm\" (UID: \"66bd6174-2bcf-4dfe-9379-624565c6d1d9\") " pod="openshift-storage/lvms-operator-677ff5c948-t4zdm" Mar 08 22:21:57.054684 master-0 kubenswrapper[29458]: I0308 22:21:57.054653 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/66bd6174-2bcf-4dfe-9379-624565c6d1d9-metrics-cert\") pod \"lvms-operator-677ff5c948-t4zdm\" (UID: \"66bd6174-2bcf-4dfe-9379-624565c6d1d9\") " pod="openshift-storage/lvms-operator-677ff5c948-t4zdm" Mar 08 22:21:57.057780 master-0 kubenswrapper[29458]: I0308 22:21:57.057723 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/66bd6174-2bcf-4dfe-9379-624565c6d1d9-apiservice-cert\") pod \"lvms-operator-677ff5c948-t4zdm\" (UID: \"66bd6174-2bcf-4dfe-9379-624565c6d1d9\") " pod="openshift-storage/lvms-operator-677ff5c948-t4zdm" Mar 08 22:21:57.067984 master-0 
kubenswrapper[29458]: I0308 22:21:57.067936 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5p22\" (UniqueName: \"kubernetes.io/projected/66bd6174-2bcf-4dfe-9379-624565c6d1d9-kube-api-access-c5p22\") pod \"lvms-operator-677ff5c948-t4zdm\" (UID: \"66bd6174-2bcf-4dfe-9379-624565c6d1d9\") " pod="openshift-storage/lvms-operator-677ff5c948-t4zdm"
Mar 08 22:21:57.086792 master-0 kubenswrapper[29458]: I0308 22:21:57.086736 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-677ff5c948-t4zdm"
Mar 08 22:21:57.542786 master-0 kubenswrapper[29458]: I0308 22:21:57.540906 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-677ff5c948-t4zdm"]
Mar 08 22:21:57.559016 master-0 kubenswrapper[29458]: W0308 22:21:57.558966 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66bd6174_2bcf_4dfe_9379_624565c6d1d9.slice/crio-9bc27b2740a252431d221ab42c33bf780c9fea39dc32e61b32ea7925766ab11f WatchSource:0}: Error finding container 9bc27b2740a252431d221ab42c33bf780c9fea39dc32e61b32ea7925766ab11f: Status 404 returned error can't find the container with id 9bc27b2740a252431d221ab42c33bf780c9fea39dc32e61b32ea7925766ab11f
Mar 08 22:21:57.734059 master-0 kubenswrapper[29458]: I0308 22:21:57.733912 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-677ff5c948-t4zdm" event={"ID":"66bd6174-2bcf-4dfe-9379-624565c6d1d9","Type":"ContainerStarted","Data":"9bc27b2740a252431d221ab42c33bf780c9fea39dc32e61b32ea7925766ab11f"}
Mar 08 22:22:02.791095 master-0 kubenswrapper[29458]: I0308 22:22:02.790948 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-677ff5c948-t4zdm" event={"ID":"66bd6174-2bcf-4dfe-9379-624565c6d1d9","Type":"ContainerStarted","Data":"48d7b12053328b18df41556993c830041c1623daa3bf2abb14cbc01bd80d1619"}
Mar 08 22:22:02.832348 master-0 kubenswrapper[29458]: I0308 22:22:02.832108 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/lvms-operator-677ff5c948-t4zdm" podStartSLOduration=2.399132436 podStartE2EDuration="6.832033411s" podCreationTimestamp="2026-03-08 22:21:56 +0000 UTC" firstStartedPulling="2026-03-08 22:21:57.563818287 +0000 UTC m=+486.851875879" lastFinishedPulling="2026-03-08 22:22:01.996719262 +0000 UTC m=+491.284776854" observedRunningTime="2026-03-08 22:22:02.825550414 +0000 UTC m=+492.113608016" watchObservedRunningTime="2026-03-08 22:22:02.832033411 +0000 UTC m=+492.120091043"
Mar 08 22:22:03.804109 master-0 kubenswrapper[29458]: I0308 22:22:03.803081 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/lvms-operator-677ff5c948-t4zdm"
Mar 08 22:22:03.819068 master-0 kubenswrapper[29458]: I0308 22:22:03.818959 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/lvms-operator-677ff5c948-t4zdm"
Mar 08 22:22:07.623287 master-0 kubenswrapper[29458]: I0308 22:22:07.623181 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss"]
Mar 08 22:22:07.626239 master-0 kubenswrapper[29458]: I0308 22:22:07.625580 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss"
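The lvms-operator startup-latency record above checks out numerically: podStartSLOduration excludes image-pull time, so SLO = E2E - (lastFinishedPulling - firstStartedPulling). With the values from the log, the pull window is 22:21:57.563818287 to 22:22:01.996719262, i.e. 4.432900975s, and 6.832033411s - 4.432900975s = 2.399132436s, exactly the logged SLO duration. (For the earlier console pod the pulling timestamps were the zero value, so SLO equaled E2E.) The arithmetic:

package main

import (
	"fmt"
	"time"
)

func main() {
	layout := "2006-01-02 15:04:05.000000000"
	first, err := time.Parse(layout, "2026-03-08 22:21:57.563818287")
	if err != nil {
		panic(err)
	}
	last, err := time.Parse(layout, "2026-03-08 22:22:01.996719262")
	if err != nil {
		panic(err)
	}
	e2e := 6832033411 * time.Nanosecond // podStartE2EDuration from the log

	pull := last.Sub(first)
	fmt.Println("pull time:", pull)        // 4.432900975s
	fmt.Println("SLO duration:", e2e-pull) // 2.399132436s, matching the log
}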
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss" Mar 08 22:22:07.629026 master-0 kubenswrapper[29458]: I0308 22:22:07.628967 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-26ndz" Mar 08 22:22:07.641396 master-0 kubenswrapper[29458]: I0308 22:22:07.641348 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss"] Mar 08 22:22:07.757657 master-0 kubenswrapper[29458]: I0308 22:22:07.757549 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f0e81c1b-ad7f-44bb-ac49-856c38df992f-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss\" (UID: \"f0e81c1b-ad7f-44bb-ac49-856c38df992f\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss" Mar 08 22:22:07.757657 master-0 kubenswrapper[29458]: I0308 22:22:07.757655 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5mx8\" (UniqueName: \"kubernetes.io/projected/f0e81c1b-ad7f-44bb-ac49-856c38df992f-kube-api-access-z5mx8\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss\" (UID: \"f0e81c1b-ad7f-44bb-ac49-856c38df992f\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss" Mar 08 22:22:07.757987 master-0 kubenswrapper[29458]: I0308 22:22:07.757817 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f0e81c1b-ad7f-44bb-ac49-856c38df992f-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss\" (UID: \"f0e81c1b-ad7f-44bb-ac49-856c38df992f\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss" Mar 08 22:22:07.859316 master-0 kubenswrapper[29458]: I0308 22:22:07.859240 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f0e81c1b-ad7f-44bb-ac49-856c38df992f-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss\" (UID: \"f0e81c1b-ad7f-44bb-ac49-856c38df992f\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss" Mar 08 22:22:07.859636 master-0 kubenswrapper[29458]: I0308 22:22:07.859546 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5mx8\" (UniqueName: \"kubernetes.io/projected/f0e81c1b-ad7f-44bb-ac49-856c38df992f-kube-api-access-z5mx8\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss\" (UID: \"f0e81c1b-ad7f-44bb-ac49-856c38df992f\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss" Mar 08 22:22:07.860257 master-0 kubenswrapper[29458]: I0308 22:22:07.859920 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f0e81c1b-ad7f-44bb-ac49-856c38df992f-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss\" (UID: \"f0e81c1b-ad7f-44bb-ac49-856c38df992f\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss" Mar 08 22:22:07.860257 master-0 kubenswrapper[29458]: I0308 22:22:07.860026 29458 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f0e81c1b-ad7f-44bb-ac49-856c38df992f-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss\" (UID: \"f0e81c1b-ad7f-44bb-ac49-856c38df992f\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss" Mar 08 22:22:07.861032 master-0 kubenswrapper[29458]: I0308 22:22:07.860937 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f0e81c1b-ad7f-44bb-ac49-856c38df992f-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss\" (UID: \"f0e81c1b-ad7f-44bb-ac49-856c38df992f\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss" Mar 08 22:22:07.881778 master-0 kubenswrapper[29458]: I0308 22:22:07.881628 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5mx8\" (UniqueName: \"kubernetes.io/projected/f0e81c1b-ad7f-44bb-ac49-856c38df992f-kube-api-access-z5mx8\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss\" (UID: \"f0e81c1b-ad7f-44bb-ac49-856c38df992f\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss" Mar 08 22:22:07.951203 master-0 kubenswrapper[29458]: I0308 22:22:07.951114 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss" Mar 08 22:22:08.401701 master-0 kubenswrapper[29458]: I0308 22:22:08.401634 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss"] Mar 08 22:22:08.406348 master-0 kubenswrapper[29458]: W0308 22:22:08.406305 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0e81c1b_ad7f_44bb_ac49_856c38df992f.slice/crio-147d4375cb4f19d6d40e6c458417dbafbdcff4d732b58fca087588baf420669f WatchSource:0}: Error finding container 147d4375cb4f19d6d40e6c458417dbafbdcff4d732b58fca087588baf420669f: Status 404 returned error can't find the container with id 147d4375cb4f19d6d40e6c458417dbafbdcff4d732b58fca087588baf420669f Mar 08 22:22:08.771814 master-0 kubenswrapper[29458]: I0308 22:22:08.771730 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4l2bqh"] Mar 08 22:22:08.790829 master-0 kubenswrapper[29458]: I0308 22:22:08.790734 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4l2bqh" Mar 08 22:22:08.805859 master-0 kubenswrapper[29458]: I0308 22:22:08.805534 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4l2bqh"] Mar 08 22:22:08.850176 master-0 kubenswrapper[29458]: I0308 22:22:08.850060 29458 generic.go:334] "Generic (PLEG): container finished" podID="f0e81c1b-ad7f-44bb-ac49-856c38df992f" containerID="52d2200d54545a229a19a546635aa695bea15bba48c6895ced18cd286f5404ac" exitCode=0 Mar 08 22:22:08.850465 master-0 kubenswrapper[29458]: I0308 22:22:08.850159 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss" event={"ID":"f0e81c1b-ad7f-44bb-ac49-856c38df992f","Type":"ContainerDied","Data":"52d2200d54545a229a19a546635aa695bea15bba48c6895ced18cd286f5404ac"} Mar 08 22:22:08.850465 master-0 kubenswrapper[29458]: I0308 22:22:08.850230 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss" event={"ID":"f0e81c1b-ad7f-44bb-ac49-856c38df992f","Type":"ContainerStarted","Data":"147d4375cb4f19d6d40e6c458417dbafbdcff4d732b58fca087588baf420669f"} Mar 08 22:22:08.982385 master-0 kubenswrapper[29458]: I0308 22:22:08.982107 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4blb\" (UniqueName: \"kubernetes.io/projected/603593aa-5382-4de8-9305-baef3f2e1914-kube-api-access-t4blb\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4l2bqh\" (UID: \"603593aa-5382-4de8-9305-baef3f2e1914\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4l2bqh" Mar 08 22:22:08.982385 master-0 kubenswrapper[29458]: I0308 22:22:08.982165 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/603593aa-5382-4de8-9305-baef3f2e1914-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4l2bqh\" (UID: \"603593aa-5382-4de8-9305-baef3f2e1914\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4l2bqh" Mar 08 22:22:08.982807 master-0 kubenswrapper[29458]: I0308 22:22:08.982418 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/603593aa-5382-4de8-9305-baef3f2e1914-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4l2bqh\" (UID: \"603593aa-5382-4de8-9305-baef3f2e1914\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4l2bqh" Mar 08 22:22:09.084567 master-0 kubenswrapper[29458]: I0308 22:22:09.084395 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4blb\" (UniqueName: \"kubernetes.io/projected/603593aa-5382-4de8-9305-baef3f2e1914-kube-api-access-t4blb\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4l2bqh\" (UID: \"603593aa-5382-4de8-9305-baef3f2e1914\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4l2bqh" Mar 08 22:22:09.085000 master-0 kubenswrapper[29458]: I0308 22:22:09.084705 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/603593aa-5382-4de8-9305-baef3f2e1914-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4l2bqh\" (UID: \"603593aa-5382-4de8-9305-baef3f2e1914\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4l2bqh" Mar 08 22:22:09.085000 master-0 kubenswrapper[29458]: I0308 22:22:09.084860 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/603593aa-5382-4de8-9305-baef3f2e1914-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4l2bqh\" (UID: \"603593aa-5382-4de8-9305-baef3f2e1914\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4l2bqh" Mar 08 22:22:09.085658 master-0 kubenswrapper[29458]: I0308 22:22:09.085577 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/603593aa-5382-4de8-9305-baef3f2e1914-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4l2bqh\" (UID: \"603593aa-5382-4de8-9305-baef3f2e1914\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4l2bqh" Mar 08 22:22:09.086001 master-0 kubenswrapper[29458]: I0308 22:22:09.085918 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/603593aa-5382-4de8-9305-baef3f2e1914-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4l2bqh\" (UID: \"603593aa-5382-4de8-9305-baef3f2e1914\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4l2bqh" Mar 08 22:22:09.108718 master-0 kubenswrapper[29458]: I0308 22:22:09.108622 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4blb\" (UniqueName: \"kubernetes.io/projected/603593aa-5382-4de8-9305-baef3f2e1914-kube-api-access-t4blb\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4l2bqh\" (UID: \"603593aa-5382-4de8-9305-baef3f2e1914\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4l2bqh" Mar 08 22:22:09.118655 master-0 kubenswrapper[29458]: I0308 22:22:09.118603 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4l2bqh" Mar 08 22:22:09.619347 master-0 kubenswrapper[29458]: I0308 22:22:09.619236 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4l2bqh"] Mar 08 22:22:09.622555 master-0 kubenswrapper[29458]: I0308 22:22:09.622520 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82smqh8"] Mar 08 22:22:09.625494 master-0 kubenswrapper[29458]: I0308 22:22:09.625452 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82smqh8" Mar 08 22:22:09.633289 master-0 kubenswrapper[29458]: I0308 22:22:09.633230 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82smqh8"] Mar 08 22:22:09.706291 master-0 kubenswrapper[29458]: I0308 22:22:09.705624 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d532dd3-ce06-416d-98ba-244d534c2225-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82smqh8\" (UID: \"1d532dd3-ce06-416d-98ba-244d534c2225\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82smqh8" Mar 08 22:22:09.706291 master-0 kubenswrapper[29458]: I0308 22:22:09.705901 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d532dd3-ce06-416d-98ba-244d534c2225-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82smqh8\" (UID: \"1d532dd3-ce06-416d-98ba-244d534c2225\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82smqh8" Mar 08 22:22:09.706291 master-0 kubenswrapper[29458]: I0308 22:22:09.706245 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfp4h\" (UniqueName: \"kubernetes.io/projected/1d532dd3-ce06-416d-98ba-244d534c2225-kube-api-access-jfp4h\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82smqh8\" (UID: \"1d532dd3-ce06-416d-98ba-244d534c2225\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82smqh8" Mar 08 22:22:09.808218 master-0 kubenswrapper[29458]: I0308 22:22:09.808057 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d532dd3-ce06-416d-98ba-244d534c2225-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82smqh8\" (UID: \"1d532dd3-ce06-416d-98ba-244d534c2225\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82smqh8" Mar 08 22:22:09.808218 master-0 kubenswrapper[29458]: I0308 22:22:09.808182 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfp4h\" (UniqueName: \"kubernetes.io/projected/1d532dd3-ce06-416d-98ba-244d534c2225-kube-api-access-jfp4h\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82smqh8\" (UID: \"1d532dd3-ce06-416d-98ba-244d534c2225\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82smqh8" Mar 08 22:22:09.809218 master-0 kubenswrapper[29458]: I0308 22:22:09.808259 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d532dd3-ce06-416d-98ba-244d534c2225-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82smqh8\" (UID: \"1d532dd3-ce06-416d-98ba-244d534c2225\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82smqh8" Mar 08 22:22:09.809218 master-0 kubenswrapper[29458]: I0308 22:22:09.808738 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d532dd3-ce06-416d-98ba-244d534c2225-util\") pod 
\"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82smqh8\" (UID: \"1d532dd3-ce06-416d-98ba-244d534c2225\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82smqh8" Mar 08 22:22:09.809218 master-0 kubenswrapper[29458]: I0308 22:22:09.808824 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d532dd3-ce06-416d-98ba-244d534c2225-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82smqh8\" (UID: \"1d532dd3-ce06-416d-98ba-244d534c2225\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82smqh8" Mar 08 22:22:09.825355 master-0 kubenswrapper[29458]: I0308 22:22:09.825270 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfp4h\" (UniqueName: \"kubernetes.io/projected/1d532dd3-ce06-416d-98ba-244d534c2225-kube-api-access-jfp4h\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82smqh8\" (UID: \"1d532dd3-ce06-416d-98ba-244d534c2225\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82smqh8" Mar 08 22:22:09.860819 master-0 kubenswrapper[29458]: I0308 22:22:09.860649 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4l2bqh" event={"ID":"603593aa-5382-4de8-9305-baef3f2e1914","Type":"ContainerStarted","Data":"736f0846e01a8a68475655aaac069b729f0dcb247ade91218649558d6d9f4664"} Mar 08 22:22:09.860819 master-0 kubenswrapper[29458]: I0308 22:22:09.860731 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4l2bqh" event={"ID":"603593aa-5382-4de8-9305-baef3f2e1914","Type":"ContainerStarted","Data":"34588f282b7a792048e1027c24f11bf16423cae552fb3e35ef45a0ec8dfdf237"} Mar 08 22:22:09.952729 master-0 kubenswrapper[29458]: I0308 22:22:09.952680 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82smqh8" Mar 08 22:22:10.408590 master-0 kubenswrapper[29458]: I0308 22:22:10.408511 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82smqh8"] Mar 08 22:22:10.868862 master-0 kubenswrapper[29458]: I0308 22:22:10.868788 29458 generic.go:334] "Generic (PLEG): container finished" podID="603593aa-5382-4de8-9305-baef3f2e1914" containerID="736f0846e01a8a68475655aaac069b729f0dcb247ade91218649558d6d9f4664" exitCode=0 Mar 08 22:22:10.869511 master-0 kubenswrapper[29458]: I0308 22:22:10.868850 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4l2bqh" event={"ID":"603593aa-5382-4de8-9305-baef3f2e1914","Type":"ContainerDied","Data":"736f0846e01a8a68475655aaac069b729f0dcb247ade91218649558d6d9f4664"} Mar 08 22:22:11.463042 master-0 kubenswrapper[29458]: W0308 22:22:11.462877 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d532dd3_ce06_416d_98ba_244d534c2225.slice/crio-b8f9d371573a2ba2bde2e4d44b1565e675b87d43e8812bfe7f8fd1ce86278813 WatchSource:0}: Error finding container b8f9d371573a2ba2bde2e4d44b1565e675b87d43e8812bfe7f8fd1ce86278813: Status 404 returned error can't find the container with id b8f9d371573a2ba2bde2e4d44b1565e675b87d43e8812bfe7f8fd1ce86278813 Mar 08 22:22:11.879847 master-0 kubenswrapper[29458]: I0308 22:22:11.879730 29458 generic.go:334] "Generic (PLEG): container finished" podID="1d532dd3-ce06-416d-98ba-244d534c2225" containerID="0ab36101dd9faf838f0caa94b3058186df58918dbb16255439251159861b60fb" exitCode=0 Mar 08 22:22:11.879847 master-0 kubenswrapper[29458]: I0308 22:22:11.879800 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82smqh8" event={"ID":"1d532dd3-ce06-416d-98ba-244d534c2225","Type":"ContainerDied","Data":"0ab36101dd9faf838f0caa94b3058186df58918dbb16255439251159861b60fb"} Mar 08 22:22:11.880500 master-0 kubenswrapper[29458]: I0308 22:22:11.879870 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82smqh8" event={"ID":"1d532dd3-ce06-416d-98ba-244d534c2225","Type":"ContainerStarted","Data":"b8f9d371573a2ba2bde2e4d44b1565e675b87d43e8812bfe7f8fd1ce86278813"} Mar 08 22:22:11.884232 master-0 kubenswrapper[29458]: I0308 22:22:11.884183 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss" event={"ID":"f0e81c1b-ad7f-44bb-ac49-856c38df992f","Type":"ContainerStarted","Data":"c29f78c78e8f9d002d61ab2e1f9814a9308cf556c5b71792bf8d7a70096e7e5a"} Mar 08 22:22:12.898433 master-0 kubenswrapper[29458]: I0308 22:22:12.898363 29458 generic.go:334] "Generic (PLEG): container finished" podID="603593aa-5382-4de8-9305-baef3f2e1914" containerID="6531e265158e77f2227c119bef71ef8317220e2a67eda00bef17ea5d6fcef37b" exitCode=0 Mar 08 22:22:12.899185 master-0 kubenswrapper[29458]: I0308 22:22:12.898533 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4l2bqh" 
event={"ID":"603593aa-5382-4de8-9305-baef3f2e1914","Type":"ContainerDied","Data":"6531e265158e77f2227c119bef71ef8317220e2a67eda00bef17ea5d6fcef37b"} Mar 08 22:22:12.903271 master-0 kubenswrapper[29458]: I0308 22:22:12.902528 29458 generic.go:334] "Generic (PLEG): container finished" podID="f0e81c1b-ad7f-44bb-ac49-856c38df992f" containerID="c29f78c78e8f9d002d61ab2e1f9814a9308cf556c5b71792bf8d7a70096e7e5a" exitCode=0 Mar 08 22:22:12.903271 master-0 kubenswrapper[29458]: I0308 22:22:12.902571 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss" event={"ID":"f0e81c1b-ad7f-44bb-ac49-856c38df992f","Type":"ContainerDied","Data":"c29f78c78e8f9d002d61ab2e1f9814a9308cf556c5b71792bf8d7a70096e7e5a"} Mar 08 22:22:13.825178 master-0 kubenswrapper[29458]: I0308 22:22:13.825034 29458 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6994646879-wvkdk" podUID="3110f839-30af-42b0-87a0-39ae9db0da4f" containerName="console" containerID="cri-o://2eaa662799f6f85f916d6dfddee48d23732fea52621741a2d1be7c04c0e0d313" gracePeriod=15 Mar 08 22:22:13.916026 master-0 kubenswrapper[29458]: I0308 22:22:13.915950 29458 generic.go:334] "Generic (PLEG): container finished" podID="603593aa-5382-4de8-9305-baef3f2e1914" containerID="0fc5618bdad5145ddbe46ddbfc326fb548addbdccfcba5e43fb38f5c7df239b5" exitCode=0 Mar 08 22:22:13.916768 master-0 kubenswrapper[29458]: I0308 22:22:13.916028 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4l2bqh" event={"ID":"603593aa-5382-4de8-9305-baef3f2e1914","Type":"ContainerDied","Data":"0fc5618bdad5145ddbe46ddbfc326fb548addbdccfcba5e43fb38f5c7df239b5"} Mar 08 22:22:13.919770 master-0 kubenswrapper[29458]: I0308 22:22:13.919730 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82smqh8" event={"ID":"1d532dd3-ce06-416d-98ba-244d534c2225","Type":"ContainerDied","Data":"c5323556dd7b82f6f19dd172cf17fb37791deb0442a2c7a908e26fae71160650"} Mar 08 22:22:13.920946 master-0 kubenswrapper[29458]: I0308 22:22:13.919537 29458 generic.go:334] "Generic (PLEG): container finished" podID="1d532dd3-ce06-416d-98ba-244d534c2225" containerID="c5323556dd7b82f6f19dd172cf17fb37791deb0442a2c7a908e26fae71160650" exitCode=0 Mar 08 22:22:13.925744 master-0 kubenswrapper[29458]: I0308 22:22:13.925694 29458 generic.go:334] "Generic (PLEG): container finished" podID="f0e81c1b-ad7f-44bb-ac49-856c38df992f" containerID="e3605ccd7e0558408e37a615b34a964283a20b1f68205ada11567ee203fe2738" exitCode=0 Mar 08 22:22:13.925824 master-0 kubenswrapper[29458]: I0308 22:22:13.925751 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss" event={"ID":"f0e81c1b-ad7f-44bb-ac49-856c38df992f","Type":"ContainerDied","Data":"e3605ccd7e0558408e37a615b34a964283a20b1f68205ada11567ee203fe2738"} Mar 08 22:22:14.301299 master-0 kubenswrapper[29458]: I0308 22:22:14.301195 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6994646879-wvkdk_3110f839-30af-42b0-87a0-39ae9db0da4f/console/0.log" Mar 08 22:22:14.301483 master-0 kubenswrapper[29458]: I0308 22:22:14.301304 29458 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6994646879-wvkdk" Mar 08 22:22:14.488698 master-0 kubenswrapper[29458]: I0308 22:22:14.488594 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3110f839-30af-42b0-87a0-39ae9db0da4f-service-ca\") pod \"3110f839-30af-42b0-87a0-39ae9db0da4f\" (UID: \"3110f839-30af-42b0-87a0-39ae9db0da4f\") " Mar 08 22:22:14.489055 master-0 kubenswrapper[29458]: I0308 22:22:14.488882 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3110f839-30af-42b0-87a0-39ae9db0da4f-trusted-ca-bundle\") pod \"3110f839-30af-42b0-87a0-39ae9db0da4f\" (UID: \"3110f839-30af-42b0-87a0-39ae9db0da4f\") " Mar 08 22:22:14.489055 master-0 kubenswrapper[29458]: I0308 22:22:14.488965 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3110f839-30af-42b0-87a0-39ae9db0da4f-console-oauth-config\") pod \"3110f839-30af-42b0-87a0-39ae9db0da4f\" (UID: \"3110f839-30af-42b0-87a0-39ae9db0da4f\") " Mar 08 22:22:14.489421 master-0 kubenswrapper[29458]: I0308 22:22:14.489064 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3110f839-30af-42b0-87a0-39ae9db0da4f-console-serving-cert\") pod \"3110f839-30af-42b0-87a0-39ae9db0da4f\" (UID: \"3110f839-30af-42b0-87a0-39ae9db0da4f\") " Mar 08 22:22:14.489421 master-0 kubenswrapper[29458]: I0308 22:22:14.489234 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3110f839-30af-42b0-87a0-39ae9db0da4f-console-config\") pod \"3110f839-30af-42b0-87a0-39ae9db0da4f\" (UID: \"3110f839-30af-42b0-87a0-39ae9db0da4f\") " Mar 08 22:22:14.489421 master-0 kubenswrapper[29458]: I0308 22:22:14.489318 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fj5t\" (UniqueName: \"kubernetes.io/projected/3110f839-30af-42b0-87a0-39ae9db0da4f-kube-api-access-6fj5t\") pod \"3110f839-30af-42b0-87a0-39ae9db0da4f\" (UID: \"3110f839-30af-42b0-87a0-39ae9db0da4f\") " Mar 08 22:22:14.489421 master-0 kubenswrapper[29458]: I0308 22:22:14.489366 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3110f839-30af-42b0-87a0-39ae9db0da4f-oauth-serving-cert\") pod \"3110f839-30af-42b0-87a0-39ae9db0da4f\" (UID: \"3110f839-30af-42b0-87a0-39ae9db0da4f\") " Mar 08 22:22:14.489782 master-0 kubenswrapper[29458]: I0308 22:22:14.489542 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3110f839-30af-42b0-87a0-39ae9db0da4f-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "3110f839-30af-42b0-87a0-39ae9db0da4f" (UID: "3110f839-30af-42b0-87a0-39ae9db0da4f"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:22:14.489782 master-0 kubenswrapper[29458]: I0308 22:22:14.489601 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3110f839-30af-42b0-87a0-39ae9db0da4f-service-ca" (OuterVolumeSpecName: "service-ca") pod "3110f839-30af-42b0-87a0-39ae9db0da4f" (UID: "3110f839-30af-42b0-87a0-39ae9db0da4f"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:22:14.490030 master-0 kubenswrapper[29458]: I0308 22:22:14.489986 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3110f839-30af-42b0-87a0-39ae9db0da4f-console-config" (OuterVolumeSpecName: "console-config") pod "3110f839-30af-42b0-87a0-39ae9db0da4f" (UID: "3110f839-30af-42b0-87a0-39ae9db0da4f"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:22:14.490219 master-0 kubenswrapper[29458]: I0308 22:22:14.490118 29458 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3110f839-30af-42b0-87a0-39ae9db0da4f-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 08 22:22:14.490219 master-0 kubenswrapper[29458]: I0308 22:22:14.490158 29458 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3110f839-30af-42b0-87a0-39ae9db0da4f-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 08 22:22:14.490690 master-0 kubenswrapper[29458]: I0308 22:22:14.490373 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3110f839-30af-42b0-87a0-39ae9db0da4f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "3110f839-30af-42b0-87a0-39ae9db0da4f" (UID: "3110f839-30af-42b0-87a0-39ae9db0da4f"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:22:14.493655 master-0 kubenswrapper[29458]: I0308 22:22:14.493592 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3110f839-30af-42b0-87a0-39ae9db0da4f-kube-api-access-6fj5t" (OuterVolumeSpecName: "kube-api-access-6fj5t") pod "3110f839-30af-42b0-87a0-39ae9db0da4f" (UID: "3110f839-30af-42b0-87a0-39ae9db0da4f"). InnerVolumeSpecName "kube-api-access-6fj5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 22:22:14.494601 master-0 kubenswrapper[29458]: I0308 22:22:14.494552 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3110f839-30af-42b0-87a0-39ae9db0da4f-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "3110f839-30af-42b0-87a0-39ae9db0da4f" (UID: "3110f839-30af-42b0-87a0-39ae9db0da4f"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 22:22:14.494898 master-0 kubenswrapper[29458]: I0308 22:22:14.494854 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3110f839-30af-42b0-87a0-39ae9db0da4f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "3110f839-30af-42b0-87a0-39ae9db0da4f" (UID: "3110f839-30af-42b0-87a0-39ae9db0da4f"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 22:22:14.590834 master-0 kubenswrapper[29458]: I0308 22:22:14.590758 29458 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3110f839-30af-42b0-87a0-39ae9db0da4f-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 08 22:22:14.590834 master-0 kubenswrapper[29458]: I0308 22:22:14.590805 29458 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3110f839-30af-42b0-87a0-39ae9db0da4f-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 08 22:22:14.590834 master-0 kubenswrapper[29458]: I0308 22:22:14.590819 29458 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3110f839-30af-42b0-87a0-39ae9db0da4f-console-config\") on node \"master-0\" DevicePath \"\"" Mar 08 22:22:14.590834 master-0 kubenswrapper[29458]: I0308 22:22:14.590828 29458 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fj5t\" (UniqueName: \"kubernetes.io/projected/3110f839-30af-42b0-87a0-39ae9db0da4f-kube-api-access-6fj5t\") on node \"master-0\" DevicePath \"\"" Mar 08 22:22:14.590834 master-0 kubenswrapper[29458]: I0308 22:22:14.590837 29458 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3110f839-30af-42b0-87a0-39ae9db0da4f-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 08 22:22:14.940829 master-0 kubenswrapper[29458]: I0308 22:22:14.940621 29458 generic.go:334] "Generic (PLEG): container finished" podID="1d532dd3-ce06-416d-98ba-244d534c2225" containerID="e0a2345fbabe76e9cf7b0d7087c8bd0ae7076c01456e5cc4cb6be8032418c3b3" exitCode=0 Mar 08 22:22:14.940829 master-0 kubenswrapper[29458]: I0308 22:22:14.940714 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82smqh8" event={"ID":"1d532dd3-ce06-416d-98ba-244d534c2225","Type":"ContainerDied","Data":"e0a2345fbabe76e9cf7b0d7087c8bd0ae7076c01456e5cc4cb6be8032418c3b3"} Mar 08 22:22:14.944792 master-0 kubenswrapper[29458]: I0308 22:22:14.944730 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6994646879-wvkdk_3110f839-30af-42b0-87a0-39ae9db0da4f/console/0.log" Mar 08 22:22:14.944957 master-0 kubenswrapper[29458]: I0308 22:22:14.944794 29458 generic.go:334] "Generic (PLEG): container finished" podID="3110f839-30af-42b0-87a0-39ae9db0da4f" containerID="2eaa662799f6f85f916d6dfddee48d23732fea52621741a2d1be7c04c0e0d313" exitCode=2 Mar 08 22:22:14.945037 master-0 kubenswrapper[29458]: I0308 22:22:14.944988 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6994646879-wvkdk" event={"ID":"3110f839-30af-42b0-87a0-39ae9db0da4f","Type":"ContainerDied","Data":"2eaa662799f6f85f916d6dfddee48d23732fea52621741a2d1be7c04c0e0d313"} Mar 08 22:22:14.945147 master-0 kubenswrapper[29458]: I0308 22:22:14.945059 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6994646879-wvkdk" event={"ID":"3110f839-30af-42b0-87a0-39ae9db0da4f","Type":"ContainerDied","Data":"c30e18c90882efd2573c741d34a2c032f25aed8f79fb244fc308acc028d2c8e2"} Mar 08 22:22:14.945222 master-0 kubenswrapper[29458]: I0308 22:22:14.945142 29458 scope.go:117] "RemoveContainer" containerID="2eaa662799f6f85f916d6dfddee48d23732fea52621741a2d1be7c04c0e0d313" Mar 08 22:22:14.945536 
master-0 kubenswrapper[29458]: I0308 22:22:14.945474 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6994646879-wvkdk" Mar 08 22:22:14.987186 master-0 kubenswrapper[29458]: I0308 22:22:14.987108 29458 scope.go:117] "RemoveContainer" containerID="2eaa662799f6f85f916d6dfddee48d23732fea52621741a2d1be7c04c0e0d313" Mar 08 22:22:14.988243 master-0 kubenswrapper[29458]: E0308 22:22:14.988023 29458 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2eaa662799f6f85f916d6dfddee48d23732fea52621741a2d1be7c04c0e0d313\": container with ID starting with 2eaa662799f6f85f916d6dfddee48d23732fea52621741a2d1be7c04c0e0d313 not found: ID does not exist" containerID="2eaa662799f6f85f916d6dfddee48d23732fea52621741a2d1be7c04c0e0d313" Mar 08 22:22:14.988243 master-0 kubenswrapper[29458]: I0308 22:22:14.988148 29458 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eaa662799f6f85f916d6dfddee48d23732fea52621741a2d1be7c04c0e0d313"} err="failed to get container status \"2eaa662799f6f85f916d6dfddee48d23732fea52621741a2d1be7c04c0e0d313\": rpc error: code = NotFound desc = could not find container \"2eaa662799f6f85f916d6dfddee48d23732fea52621741a2d1be7c04c0e0d313\": container with ID starting with 2eaa662799f6f85f916d6dfddee48d23732fea52621741a2d1be7c04c0e0d313 not found: ID does not exist" Mar 08 22:22:15.053745 master-0 kubenswrapper[29458]: I0308 22:22:15.053675 29458 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6994646879-wvkdk"] Mar 08 22:22:15.062398 master-0 kubenswrapper[29458]: I0308 22:22:15.062308 29458 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6994646879-wvkdk"] Mar 08 22:22:15.457577 master-0 kubenswrapper[29458]: I0308 22:22:15.457512 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss" Mar 08 22:22:15.537255 master-0 kubenswrapper[29458]: I0308 22:22:15.537201 29458 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4l2bqh" Mar 08 22:22:15.611427 master-0 kubenswrapper[29458]: I0308 22:22:15.611309 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5mx8\" (UniqueName: \"kubernetes.io/projected/f0e81c1b-ad7f-44bb-ac49-856c38df992f-kube-api-access-z5mx8\") pod \"f0e81c1b-ad7f-44bb-ac49-856c38df992f\" (UID: \"f0e81c1b-ad7f-44bb-ac49-856c38df992f\") " Mar 08 22:22:15.611779 master-0 kubenswrapper[29458]: I0308 22:22:15.611541 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f0e81c1b-ad7f-44bb-ac49-856c38df992f-bundle\") pod \"f0e81c1b-ad7f-44bb-ac49-856c38df992f\" (UID: \"f0e81c1b-ad7f-44bb-ac49-856c38df992f\") " Mar 08 22:22:15.611936 master-0 kubenswrapper[29458]: I0308 22:22:15.611865 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f0e81c1b-ad7f-44bb-ac49-856c38df992f-util\") pod \"f0e81c1b-ad7f-44bb-ac49-856c38df992f\" (UID: \"f0e81c1b-ad7f-44bb-ac49-856c38df992f\") " Mar 08 22:22:15.613673 master-0 kubenswrapper[29458]: I0308 22:22:15.613596 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0e81c1b-ad7f-44bb-ac49-856c38df992f-bundle" (OuterVolumeSpecName: "bundle") pod "f0e81c1b-ad7f-44bb-ac49-856c38df992f" (UID: "f0e81c1b-ad7f-44bb-ac49-856c38df992f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 22:22:15.615941 master-0 kubenswrapper[29458]: I0308 22:22:15.615854 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0e81c1b-ad7f-44bb-ac49-856c38df992f-kube-api-access-z5mx8" (OuterVolumeSpecName: "kube-api-access-z5mx8") pod "f0e81c1b-ad7f-44bb-ac49-856c38df992f" (UID: "f0e81c1b-ad7f-44bb-ac49-856c38df992f"). InnerVolumeSpecName "kube-api-access-z5mx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 22:22:15.626911 master-0 kubenswrapper[29458]: I0308 22:22:15.626820 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0e81c1b-ad7f-44bb-ac49-856c38df992f-util" (OuterVolumeSpecName: "util") pod "f0e81c1b-ad7f-44bb-ac49-856c38df992f" (UID: "f0e81c1b-ad7f-44bb-ac49-856c38df992f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 22:22:15.714403 master-0 kubenswrapper[29458]: I0308 22:22:15.714309 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4blb\" (UniqueName: \"kubernetes.io/projected/603593aa-5382-4de8-9305-baef3f2e1914-kube-api-access-t4blb\") pod \"603593aa-5382-4de8-9305-baef3f2e1914\" (UID: \"603593aa-5382-4de8-9305-baef3f2e1914\") " Mar 08 22:22:15.714752 master-0 kubenswrapper[29458]: I0308 22:22:15.714556 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/603593aa-5382-4de8-9305-baef3f2e1914-bundle\") pod \"603593aa-5382-4de8-9305-baef3f2e1914\" (UID: \"603593aa-5382-4de8-9305-baef3f2e1914\") " Mar 08 22:22:15.714752 master-0 kubenswrapper[29458]: I0308 22:22:15.714707 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/603593aa-5382-4de8-9305-baef3f2e1914-util\") pod \"603593aa-5382-4de8-9305-baef3f2e1914\" (UID: \"603593aa-5382-4de8-9305-baef3f2e1914\") " Mar 08 22:22:15.715402 master-0 kubenswrapper[29458]: I0308 22:22:15.715350 29458 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5mx8\" (UniqueName: \"kubernetes.io/projected/f0e81c1b-ad7f-44bb-ac49-856c38df992f-kube-api-access-z5mx8\") on node \"master-0\" DevicePath \"\"" Mar 08 22:22:15.715402 master-0 kubenswrapper[29458]: I0308 22:22:15.715392 29458 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f0e81c1b-ad7f-44bb-ac49-856c38df992f-bundle\") on node \"master-0\" DevicePath \"\"" Mar 08 22:22:15.715560 master-0 kubenswrapper[29458]: I0308 22:22:15.715413 29458 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f0e81c1b-ad7f-44bb-ac49-856c38df992f-util\") on node \"master-0\" DevicePath \"\"" Mar 08 22:22:15.716942 master-0 kubenswrapper[29458]: I0308 22:22:15.716457 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/603593aa-5382-4de8-9305-baef3f2e1914-bundle" (OuterVolumeSpecName: "bundle") pod "603593aa-5382-4de8-9305-baef3f2e1914" (UID: "603593aa-5382-4de8-9305-baef3f2e1914"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 22:22:15.719224 master-0 kubenswrapper[29458]: I0308 22:22:15.719119 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/603593aa-5382-4de8-9305-baef3f2e1914-kube-api-access-t4blb" (OuterVolumeSpecName: "kube-api-access-t4blb") pod "603593aa-5382-4de8-9305-baef3f2e1914" (UID: "603593aa-5382-4de8-9305-baef3f2e1914"). InnerVolumeSpecName "kube-api-access-t4blb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 22:22:15.737362 master-0 kubenswrapper[29458]: I0308 22:22:15.737243 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/603593aa-5382-4de8-9305-baef3f2e1914-util" (OuterVolumeSpecName: "util") pod "603593aa-5382-4de8-9305-baef3f2e1914" (UID: "603593aa-5382-4de8-9305-baef3f2e1914"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 22:22:15.817432 master-0 kubenswrapper[29458]: I0308 22:22:15.817318 29458 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/603593aa-5382-4de8-9305-baef3f2e1914-util\") on node \"master-0\" DevicePath \"\"" Mar 08 22:22:15.817432 master-0 kubenswrapper[29458]: I0308 22:22:15.817390 29458 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4blb\" (UniqueName: \"kubernetes.io/projected/603593aa-5382-4de8-9305-baef3f2e1914-kube-api-access-t4blb\") on node \"master-0\" DevicePath \"\"" Mar 08 22:22:15.817432 master-0 kubenswrapper[29458]: I0308 22:22:15.817417 29458 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/603593aa-5382-4de8-9305-baef3f2e1914-bundle\") on node \"master-0\" DevicePath \"\"" Mar 08 22:22:15.961919 master-0 kubenswrapper[29458]: I0308 22:22:15.961823 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss" event={"ID":"f0e81c1b-ad7f-44bb-ac49-856c38df992f","Type":"ContainerDied","Data":"147d4375cb4f19d6d40e6c458417dbafbdcff4d732b58fca087588baf420669f"} Mar 08 22:22:15.961919 master-0 kubenswrapper[29458]: I0308 22:22:15.961904 29458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="147d4375cb4f19d6d40e6c458417dbafbdcff4d732b58fca087588baf420669f" Mar 08 22:22:15.961919 master-0 kubenswrapper[29458]: I0308 22:22:15.961907 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nzwss" Mar 08 22:22:15.965845 master-0 kubenswrapper[29458]: I0308 22:22:15.965762 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4l2bqh" event={"ID":"603593aa-5382-4de8-9305-baef3f2e1914","Type":"ContainerDied","Data":"34588f282b7a792048e1027c24f11bf16423cae552fb3e35ef45a0ec8dfdf237"} Mar 08 22:22:15.965845 master-0 kubenswrapper[29458]: I0308 22:22:15.965832 29458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34588f282b7a792048e1027c24f11bf16423cae552fb3e35ef45a0ec8dfdf237" Mar 08 22:22:15.965845 master-0 kubenswrapper[29458]: I0308 22:22:15.965841 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4l2bqh" Mar 08 22:22:16.358560 master-0 kubenswrapper[29458]: I0308 22:22:16.358481 29458 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82smqh8" Mar 08 22:22:16.529415 master-0 kubenswrapper[29458]: I0308 22:22:16.529293 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfp4h\" (UniqueName: \"kubernetes.io/projected/1d532dd3-ce06-416d-98ba-244d534c2225-kube-api-access-jfp4h\") pod \"1d532dd3-ce06-416d-98ba-244d534c2225\" (UID: \"1d532dd3-ce06-416d-98ba-244d534c2225\") " Mar 08 22:22:16.529415 master-0 kubenswrapper[29458]: I0308 22:22:16.529379 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d532dd3-ce06-416d-98ba-244d534c2225-bundle\") pod \"1d532dd3-ce06-416d-98ba-244d534c2225\" (UID: \"1d532dd3-ce06-416d-98ba-244d534c2225\") " Mar 08 22:22:16.529415 master-0 kubenswrapper[29458]: I0308 22:22:16.529451 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d532dd3-ce06-416d-98ba-244d534c2225-util\") pod \"1d532dd3-ce06-416d-98ba-244d534c2225\" (UID: \"1d532dd3-ce06-416d-98ba-244d534c2225\") " Mar 08 22:22:16.530884 master-0 kubenswrapper[29458]: I0308 22:22:16.530789 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d532dd3-ce06-416d-98ba-244d534c2225-bundle" (OuterVolumeSpecName: "bundle") pod "1d532dd3-ce06-416d-98ba-244d534c2225" (UID: "1d532dd3-ce06-416d-98ba-244d534c2225"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 22:22:16.544154 master-0 kubenswrapper[29458]: I0308 22:22:16.535264 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d532dd3-ce06-416d-98ba-244d534c2225-kube-api-access-jfp4h" (OuterVolumeSpecName: "kube-api-access-jfp4h") pod "1d532dd3-ce06-416d-98ba-244d534c2225" (UID: "1d532dd3-ce06-416d-98ba-244d534c2225"). InnerVolumeSpecName "kube-api-access-jfp4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 22:22:16.561108 master-0 kubenswrapper[29458]: I0308 22:22:16.560929 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d532dd3-ce06-416d-98ba-244d534c2225-util" (OuterVolumeSpecName: "util") pod "1d532dd3-ce06-416d-98ba-244d534c2225" (UID: "1d532dd3-ce06-416d-98ba-244d534c2225"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 22:22:16.632015 master-0 kubenswrapper[29458]: I0308 22:22:16.631824 29458 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d532dd3-ce06-416d-98ba-244d534c2225-util\") on node \"master-0\" DevicePath \"\"" Mar 08 22:22:16.632015 master-0 kubenswrapper[29458]: I0308 22:22:16.631888 29458 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfp4h\" (UniqueName: \"kubernetes.io/projected/1d532dd3-ce06-416d-98ba-244d534c2225-kube-api-access-jfp4h\") on node \"master-0\" DevicePath \"\"" Mar 08 22:22:16.632015 master-0 kubenswrapper[29458]: I0308 22:22:16.631903 29458 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d532dd3-ce06-416d-98ba-244d534c2225-bundle\") on node \"master-0\" DevicePath \"\"" Mar 08 22:22:16.980944 master-0 kubenswrapper[29458]: I0308 22:22:16.980775 29458 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82smqh8" Mar 08 22:22:16.983599 master-0 kubenswrapper[29458]: I0308 22:22:16.983544 29458 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3110f839-30af-42b0-87a0-39ae9db0da4f" path="/var/lib/kubelet/pods/3110f839-30af-42b0-87a0-39ae9db0da4f/volumes" Mar 08 22:22:16.984579 master-0 kubenswrapper[29458]: I0308 22:22:16.984540 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82smqh8" event={"ID":"1d532dd3-ce06-416d-98ba-244d534c2225","Type":"ContainerDied","Data":"b8f9d371573a2ba2bde2e4d44b1565e675b87d43e8812bfe7f8fd1ce86278813"} Mar 08 22:22:16.984579 master-0 kubenswrapper[29458]: I0308 22:22:16.984584 29458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8f9d371573a2ba2bde2e4d44b1565e675b87d43e8812bfe7f8fd1ce86278813" Mar 08 22:22:18.013035 master-0 kubenswrapper[29458]: I0308 22:22:18.012918 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ztjmq"] Mar 08 22:22:18.014462 master-0 kubenswrapper[29458]: E0308 22:22:18.013492 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="603593aa-5382-4de8-9305-baef3f2e1914" containerName="util" Mar 08 22:22:18.014462 master-0 kubenswrapper[29458]: I0308 22:22:18.013525 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="603593aa-5382-4de8-9305-baef3f2e1914" containerName="util" Mar 08 22:22:18.014462 master-0 kubenswrapper[29458]: E0308 22:22:18.013563 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e81c1b-ad7f-44bb-ac49-856c38df992f" containerName="extract" Mar 08 22:22:18.014462 master-0 kubenswrapper[29458]: I0308 22:22:18.013578 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e81c1b-ad7f-44bb-ac49-856c38df992f" containerName="extract" Mar 08 22:22:18.014462 master-0 kubenswrapper[29458]: E0308 22:22:18.013596 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="603593aa-5382-4de8-9305-baef3f2e1914" containerName="pull" Mar 08 22:22:18.014462 master-0 kubenswrapper[29458]: I0308 22:22:18.013675 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="603593aa-5382-4de8-9305-baef3f2e1914" containerName="pull" Mar 08 22:22:18.014462 master-0 kubenswrapper[29458]: E0308 22:22:18.013713 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e81c1b-ad7f-44bb-ac49-856c38df992f" containerName="pull" Mar 08 22:22:18.014462 master-0 kubenswrapper[29458]: I0308 22:22:18.013727 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e81c1b-ad7f-44bb-ac49-856c38df992f" containerName="pull" Mar 08 22:22:18.014462 master-0 kubenswrapper[29458]: E0308 22:22:18.013754 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="603593aa-5382-4de8-9305-baef3f2e1914" containerName="extract" Mar 08 22:22:18.014462 master-0 kubenswrapper[29458]: I0308 22:22:18.013772 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="603593aa-5382-4de8-9305-baef3f2e1914" containerName="extract" Mar 08 22:22:18.014462 master-0 kubenswrapper[29458]: E0308 22:22:18.013802 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3110f839-30af-42b0-87a0-39ae9db0da4f" containerName="console" Mar 08 22:22:18.014462 master-0 kubenswrapper[29458]: I0308 22:22:18.013821 29458 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="3110f839-30af-42b0-87a0-39ae9db0da4f" containerName="console" Mar 08 22:22:18.014462 master-0 kubenswrapper[29458]: E0308 22:22:18.013864 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d532dd3-ce06-416d-98ba-244d534c2225" containerName="pull" Mar 08 22:22:18.014462 master-0 kubenswrapper[29458]: I0308 22:22:18.013878 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d532dd3-ce06-416d-98ba-244d534c2225" containerName="pull" Mar 08 22:22:18.014462 master-0 kubenswrapper[29458]: E0308 22:22:18.013907 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d532dd3-ce06-416d-98ba-244d534c2225" containerName="util" Mar 08 22:22:18.014462 master-0 kubenswrapper[29458]: I0308 22:22:18.013923 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d532dd3-ce06-416d-98ba-244d534c2225" containerName="util" Mar 08 22:22:18.014462 master-0 kubenswrapper[29458]: E0308 22:22:18.013945 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e81c1b-ad7f-44bb-ac49-856c38df992f" containerName="util" Mar 08 22:22:18.014462 master-0 kubenswrapper[29458]: I0308 22:22:18.013960 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e81c1b-ad7f-44bb-ac49-856c38df992f" containerName="util" Mar 08 22:22:18.014462 master-0 kubenswrapper[29458]: E0308 22:22:18.013988 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d532dd3-ce06-416d-98ba-244d534c2225" containerName="extract" Mar 08 22:22:18.014462 master-0 kubenswrapper[29458]: I0308 22:22:18.014001 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d532dd3-ce06-416d-98ba-244d534c2225" containerName="extract" Mar 08 22:22:18.014462 master-0 kubenswrapper[29458]: I0308 22:22:18.014304 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="3110f839-30af-42b0-87a0-39ae9db0da4f" containerName="console" Mar 08 22:22:18.014462 master-0 kubenswrapper[29458]: I0308 22:22:18.014331 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="603593aa-5382-4de8-9305-baef3f2e1914" containerName="extract" Mar 08 22:22:18.014462 master-0 kubenswrapper[29458]: I0308 22:22:18.014350 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d532dd3-ce06-416d-98ba-244d534c2225" containerName="extract" Mar 08 22:22:18.014462 master-0 kubenswrapper[29458]: I0308 22:22:18.014381 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0e81c1b-ad7f-44bb-ac49-856c38df992f" containerName="extract" Mar 08 22:22:18.018442 master-0 kubenswrapper[29458]: I0308 22:22:18.016384 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ztjmq" Mar 08 22:22:18.026555 master-0 kubenswrapper[29458]: I0308 22:22:18.025684 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-26ndz" Mar 08 22:22:18.061676 master-0 kubenswrapper[29458]: I0308 22:22:18.061616 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ztjmq"] Mar 08 22:22:18.067654 master-0 kubenswrapper[29458]: I0308 22:22:18.067528 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/907771e2-a521-407a-9346-e8e41df482e5-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ztjmq\" (UID: \"907771e2-a521-407a-9346-e8e41df482e5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ztjmq" Mar 08 22:22:18.068263 master-0 kubenswrapper[29458]: I0308 22:22:18.068219 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68552\" (UniqueName: \"kubernetes.io/projected/907771e2-a521-407a-9346-e8e41df482e5-kube-api-access-68552\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ztjmq\" (UID: \"907771e2-a521-407a-9346-e8e41df482e5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ztjmq" Mar 08 22:22:18.069095 master-0 kubenswrapper[29458]: I0308 22:22:18.068981 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/907771e2-a521-407a-9346-e8e41df482e5-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ztjmq\" (UID: \"907771e2-a521-407a-9346-e8e41df482e5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ztjmq" Mar 08 22:22:18.170527 master-0 kubenswrapper[29458]: I0308 22:22:18.170406 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/907771e2-a521-407a-9346-e8e41df482e5-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ztjmq\" (UID: \"907771e2-a521-407a-9346-e8e41df482e5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ztjmq" Mar 08 22:22:18.170527 master-0 kubenswrapper[29458]: I0308 22:22:18.170513 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68552\" (UniqueName: \"kubernetes.io/projected/907771e2-a521-407a-9346-e8e41df482e5-kube-api-access-68552\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ztjmq\" (UID: \"907771e2-a521-407a-9346-e8e41df482e5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ztjmq" Mar 08 22:22:18.171107 master-0 kubenswrapper[29458]: I0308 22:22:18.171028 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/907771e2-a521-407a-9346-e8e41df482e5-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ztjmq\" (UID: \"907771e2-a521-407a-9346-e8e41df482e5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ztjmq" Mar 08 22:22:18.171615 master-0 kubenswrapper[29458]: I0308 22:22:18.171491 29458 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/907771e2-a521-407a-9346-e8e41df482e5-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ztjmq\" (UID: \"907771e2-a521-407a-9346-e8e41df482e5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ztjmq" Mar 08 22:22:18.171615 master-0 kubenswrapper[29458]: I0308 22:22:18.171541 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/907771e2-a521-407a-9346-e8e41df482e5-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ztjmq\" (UID: \"907771e2-a521-407a-9346-e8e41df482e5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ztjmq" Mar 08 22:22:18.203772 master-0 kubenswrapper[29458]: I0308 22:22:18.203665 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68552\" (UniqueName: \"kubernetes.io/projected/907771e2-a521-407a-9346-e8e41df482e5-kube-api-access-68552\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ztjmq\" (UID: \"907771e2-a521-407a-9346-e8e41df482e5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ztjmq" Mar 08 22:22:18.368143 master-0 kubenswrapper[29458]: I0308 22:22:18.368005 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ztjmq" Mar 08 22:22:18.989841 master-0 kubenswrapper[29458]: I0308 22:22:18.989756 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ztjmq"] Mar 08 22:22:19.012447 master-0 kubenswrapper[29458]: I0308 22:22:19.012326 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ztjmq" event={"ID":"907771e2-a521-407a-9346-e8e41df482e5","Type":"ContainerStarted","Data":"d830b5a6ed78aba87b11d90f6d312f635d1bffbce73c62fbd7c9635f3a27893f"} Mar 08 22:22:20.021781 master-0 kubenswrapper[29458]: I0308 22:22:20.021729 29458 generic.go:334] "Generic (PLEG): container finished" podID="907771e2-a521-407a-9346-e8e41df482e5" containerID="020945b4dd2fdc7e47251a5692e58cc4765e8bf27336c275360397c4578c71ba" exitCode=0 Mar 08 22:22:20.022860 master-0 kubenswrapper[29458]: I0308 22:22:20.021964 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ztjmq" event={"ID":"907771e2-a521-407a-9346-e8e41df482e5","Type":"ContainerDied","Data":"020945b4dd2fdc7e47251a5692e58cc4765e8bf27336c275360397c4578c71ba"} Mar 08 22:22:21.336306 master-0 kubenswrapper[29458]: I0308 22:22:21.336220 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hxf8z"] Mar 08 22:22:21.337423 master-0 kubenswrapper[29458]: I0308 22:22:21.337388 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hxf8z" Mar 08 22:22:21.340499 master-0 kubenswrapper[29458]: I0308 22:22:21.340122 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Mar 08 22:22:21.340499 master-0 kubenswrapper[29458]: I0308 22:22:21.340292 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Mar 08 22:22:21.353507 master-0 kubenswrapper[29458]: I0308 22:22:21.352765 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hxf8z"] Mar 08 22:22:21.533941 master-0 kubenswrapper[29458]: I0308 22:22:21.533742 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc7cz\" (UniqueName: \"kubernetes.io/projected/caf4ffac-ec6a-4abf-8b2a-4ebf7948d8a9-kube-api-access-sc7cz\") pod \"cert-manager-operator-controller-manager-66c8bdd694-hxf8z\" (UID: \"caf4ffac-ec6a-4abf-8b2a-4ebf7948d8a9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hxf8z" Mar 08 22:22:21.533941 master-0 kubenswrapper[29458]: I0308 22:22:21.533901 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/caf4ffac-ec6a-4abf-8b2a-4ebf7948d8a9-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-hxf8z\" (UID: \"caf4ffac-ec6a-4abf-8b2a-4ebf7948d8a9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hxf8z" Mar 08 22:22:21.635527 master-0 kubenswrapper[29458]: I0308 22:22:21.635431 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc7cz\" (UniqueName: \"kubernetes.io/projected/caf4ffac-ec6a-4abf-8b2a-4ebf7948d8a9-kube-api-access-sc7cz\") pod \"cert-manager-operator-controller-manager-66c8bdd694-hxf8z\" (UID: \"caf4ffac-ec6a-4abf-8b2a-4ebf7948d8a9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hxf8z" Mar 08 22:22:21.635833 master-0 kubenswrapper[29458]: I0308 22:22:21.635671 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/caf4ffac-ec6a-4abf-8b2a-4ebf7948d8a9-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-hxf8z\" (UID: \"caf4ffac-ec6a-4abf-8b2a-4ebf7948d8a9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hxf8z" Mar 08 22:22:21.636472 master-0 kubenswrapper[29458]: I0308 22:22:21.636403 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/caf4ffac-ec6a-4abf-8b2a-4ebf7948d8a9-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-hxf8z\" (UID: \"caf4ffac-ec6a-4abf-8b2a-4ebf7948d8a9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hxf8z" Mar 08 22:22:21.655486 master-0 kubenswrapper[29458]: I0308 22:22:21.655418 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc7cz\" (UniqueName: \"kubernetes.io/projected/caf4ffac-ec6a-4abf-8b2a-4ebf7948d8a9-kube-api-access-sc7cz\") pod \"cert-manager-operator-controller-manager-66c8bdd694-hxf8z\" (UID: \"caf4ffac-ec6a-4abf-8b2a-4ebf7948d8a9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hxf8z" 
Mar 08 22:22:21.955304 master-0 kubenswrapper[29458]: I0308 22:22:21.955219 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hxf8z" Mar 08 22:22:22.049501 master-0 kubenswrapper[29458]: I0308 22:22:22.048699 29458 generic.go:334] "Generic (PLEG): container finished" podID="907771e2-a521-407a-9346-e8e41df482e5" containerID="689a2ec24ea71567e24f1155b1b85a1e080cb93fae46fb0cebe7290771c8a3bb" exitCode=0 Mar 08 22:22:22.049501 master-0 kubenswrapper[29458]: I0308 22:22:22.048767 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ztjmq" event={"ID":"907771e2-a521-407a-9346-e8e41df482e5","Type":"ContainerDied","Data":"689a2ec24ea71567e24f1155b1b85a1e080cb93fae46fb0cebe7290771c8a3bb"} Mar 08 22:22:22.482050 master-0 kubenswrapper[29458]: I0308 22:22:22.481985 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hxf8z"] Mar 08 22:22:23.063698 master-0 kubenswrapper[29458]: I0308 22:22:23.063493 29458 generic.go:334] "Generic (PLEG): container finished" podID="907771e2-a521-407a-9346-e8e41df482e5" containerID="0b84a1559439d710b6cd283fa924f0bbbd0d8ae3c76b78f163bafe11eb9bc0dd" exitCode=0 Mar 08 22:22:23.063698 master-0 kubenswrapper[29458]: I0308 22:22:23.063617 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ztjmq" event={"ID":"907771e2-a521-407a-9346-e8e41df482e5","Type":"ContainerDied","Data":"0b84a1559439d710b6cd283fa924f0bbbd0d8ae3c76b78f163bafe11eb9bc0dd"} Mar 08 22:22:23.065748 master-0 kubenswrapper[29458]: I0308 22:22:23.065685 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hxf8z" event={"ID":"caf4ffac-ec6a-4abf-8b2a-4ebf7948d8a9","Type":"ContainerStarted","Data":"6a5ff98a60e971922742d3119104545dff959c0a44f2dd4d59d9969d65c4b69d"} Mar 08 22:22:24.411105 master-0 kubenswrapper[29458]: I0308 22:22:24.411056 29458 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ztjmq" Mar 08 22:22:24.448905 master-0 kubenswrapper[29458]: I0308 22:22:24.448695 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/907771e2-a521-407a-9346-e8e41df482e5-bundle\") pod \"907771e2-a521-407a-9346-e8e41df482e5\" (UID: \"907771e2-a521-407a-9346-e8e41df482e5\") " Mar 08 22:22:24.448905 master-0 kubenswrapper[29458]: I0308 22:22:24.448769 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/907771e2-a521-407a-9346-e8e41df482e5-util\") pod \"907771e2-a521-407a-9346-e8e41df482e5\" (UID: \"907771e2-a521-407a-9346-e8e41df482e5\") " Mar 08 22:22:24.448905 master-0 kubenswrapper[29458]: I0308 22:22:24.448811 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68552\" (UniqueName: \"kubernetes.io/projected/907771e2-a521-407a-9346-e8e41df482e5-kube-api-access-68552\") pod \"907771e2-a521-407a-9346-e8e41df482e5\" (UID: \"907771e2-a521-407a-9346-e8e41df482e5\") " Mar 08 22:22:24.450997 master-0 kubenswrapper[29458]: I0308 22:22:24.450962 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/907771e2-a521-407a-9346-e8e41df482e5-bundle" (OuterVolumeSpecName: "bundle") pod "907771e2-a521-407a-9346-e8e41df482e5" (UID: "907771e2-a521-407a-9346-e8e41df482e5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 22:22:24.455011 master-0 kubenswrapper[29458]: I0308 22:22:24.454724 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/907771e2-a521-407a-9346-e8e41df482e5-kube-api-access-68552" (OuterVolumeSpecName: "kube-api-access-68552") pod "907771e2-a521-407a-9346-e8e41df482e5" (UID: "907771e2-a521-407a-9346-e8e41df482e5"). InnerVolumeSpecName "kube-api-access-68552". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 22:22:24.464657 master-0 kubenswrapper[29458]: I0308 22:22:24.464612 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/907771e2-a521-407a-9346-e8e41df482e5-util" (OuterVolumeSpecName: "util") pod "907771e2-a521-407a-9346-e8e41df482e5" (UID: "907771e2-a521-407a-9346-e8e41df482e5"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 22:22:24.551493 master-0 kubenswrapper[29458]: I0308 22:22:24.551378 29458 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68552\" (UniqueName: \"kubernetes.io/projected/907771e2-a521-407a-9346-e8e41df482e5-kube-api-access-68552\") on node \"master-0\" DevicePath \"\"" Mar 08 22:22:24.551493 master-0 kubenswrapper[29458]: I0308 22:22:24.551461 29458 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/907771e2-a521-407a-9346-e8e41df482e5-bundle\") on node \"master-0\" DevicePath \"\"" Mar 08 22:22:24.551493 master-0 kubenswrapper[29458]: I0308 22:22:24.551498 29458 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/907771e2-a521-407a-9346-e8e41df482e5-util\") on node \"master-0\" DevicePath \"\"" Mar 08 22:22:25.084816 master-0 kubenswrapper[29458]: I0308 22:22:25.084736 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ztjmq" event={"ID":"907771e2-a521-407a-9346-e8e41df482e5","Type":"ContainerDied","Data":"d830b5a6ed78aba87b11d90f6d312f635d1bffbce73c62fbd7c9635f3a27893f"} Mar 08 22:22:25.084816 master-0 kubenswrapper[29458]: I0308 22:22:25.084814 29458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d830b5a6ed78aba87b11d90f6d312f635d1bffbce73c62fbd7c9635f3a27893f" Mar 08 22:22:25.085102 master-0 kubenswrapper[29458]: I0308 22:22:25.084869 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ztjmq" Mar 08 22:22:28.109322 master-0 kubenswrapper[29458]: I0308 22:22:28.109257 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hxf8z" event={"ID":"caf4ffac-ec6a-4abf-8b2a-4ebf7948d8a9","Type":"ContainerStarted","Data":"d90612d2a45caba043af12189c8853807807d3c96cef8e199339df22f1322f13"} Mar 08 22:22:28.160373 master-0 kubenswrapper[29458]: I0308 22:22:28.160275 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hxf8z" podStartSLOduration=2.638815623 podStartE2EDuration="7.160252011s" podCreationTimestamp="2026-03-08 22:22:21 +0000 UTC" firstStartedPulling="2026-03-08 22:22:22.47775286 +0000 UTC m=+511.765810452" lastFinishedPulling="2026-03-08 22:22:26.999189238 +0000 UTC m=+516.287246840" observedRunningTime="2026-03-08 22:22:28.15591712 +0000 UTC m=+517.443974712" watchObservedRunningTime="2026-03-08 22:22:28.160252011 +0000 UTC m=+517.448309603" Mar 08 22:22:31.380795 master-0 kubenswrapper[29458]: I0308 22:22:31.380709 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-h8k6g"] Mar 08 22:22:31.381368 master-0 kubenswrapper[29458]: E0308 22:22:31.381050 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="907771e2-a521-407a-9346-e8e41df482e5" containerName="extract" Mar 08 22:22:31.381368 master-0 kubenswrapper[29458]: I0308 22:22:31.381064 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="907771e2-a521-407a-9346-e8e41df482e5" containerName="extract" Mar 08 22:22:31.381368 master-0 kubenswrapper[29458]: E0308 22:22:31.381096 29458 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="907771e2-a521-407a-9346-e8e41df482e5" containerName="pull" Mar 08 22:22:31.381368 master-0 kubenswrapper[29458]: I0308 22:22:31.381104 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="907771e2-a521-407a-9346-e8e41df482e5" containerName="pull" Mar 08 22:22:31.381368 master-0 kubenswrapper[29458]: E0308 22:22:31.381131 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="907771e2-a521-407a-9346-e8e41df482e5" containerName="util" Mar 08 22:22:31.381368 master-0 kubenswrapper[29458]: I0308 22:22:31.381139 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="907771e2-a521-407a-9346-e8e41df482e5" containerName="util" Mar 08 22:22:31.381368 master-0 kubenswrapper[29458]: I0308 22:22:31.381280 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="907771e2-a521-407a-9346-e8e41df482e5" containerName="extract" Mar 08 22:22:31.381811 master-0 kubenswrapper[29458]: I0308 22:22:31.381788 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-h8k6g" Mar 08 22:22:31.384556 master-0 kubenswrapper[29458]: I0308 22:22:31.384517 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 08 22:22:31.386206 master-0 kubenswrapper[29458]: I0308 22:22:31.386184 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 08 22:22:31.401908 master-0 kubenswrapper[29458]: I0308 22:22:31.401846 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-h8k6g"] Mar 08 22:22:31.480608 master-0 kubenswrapper[29458]: I0308 22:22:31.480542 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/051edafe-c623-4173-b150-e4d1d5348042-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-h8k6g\" (UID: \"051edafe-c623-4173-b150-e4d1d5348042\") " pod="cert-manager/cert-manager-webhook-6888856db4-h8k6g" Mar 08 22:22:31.480861 master-0 kubenswrapper[29458]: I0308 22:22:31.480659 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xndrk\" (UniqueName: \"kubernetes.io/projected/051edafe-c623-4173-b150-e4d1d5348042-kube-api-access-xndrk\") pod \"cert-manager-webhook-6888856db4-h8k6g\" (UID: \"051edafe-c623-4173-b150-e4d1d5348042\") " pod="cert-manager/cert-manager-webhook-6888856db4-h8k6g" Mar 08 22:22:31.582436 master-0 kubenswrapper[29458]: I0308 22:22:31.582370 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xndrk\" (UniqueName: \"kubernetes.io/projected/051edafe-c623-4173-b150-e4d1d5348042-kube-api-access-xndrk\") pod \"cert-manager-webhook-6888856db4-h8k6g\" (UID: \"051edafe-c623-4173-b150-e4d1d5348042\") " pod="cert-manager/cert-manager-webhook-6888856db4-h8k6g" Mar 08 22:22:31.582709 master-0 kubenswrapper[29458]: I0308 22:22:31.582521 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/051edafe-c623-4173-b150-e4d1d5348042-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-h8k6g\" (UID: \"051edafe-c623-4173-b150-e4d1d5348042\") " pod="cert-manager/cert-manager-webhook-6888856db4-h8k6g" Mar 08 22:22:31.599335 master-0 kubenswrapper[29458]: I0308 22:22:31.599281 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/051edafe-c623-4173-b150-e4d1d5348042-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-h8k6g\" (UID: \"051edafe-c623-4173-b150-e4d1d5348042\") " pod="cert-manager/cert-manager-webhook-6888856db4-h8k6g" Mar 08 22:22:31.617193 master-0 kubenswrapper[29458]: I0308 22:22:31.617141 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xndrk\" (UniqueName: \"kubernetes.io/projected/051edafe-c623-4173-b150-e4d1d5348042-kube-api-access-xndrk\") pod \"cert-manager-webhook-6888856db4-h8k6g\" (UID: \"051edafe-c623-4173-b150-e4d1d5348042\") " pod="cert-manager/cert-manager-webhook-6888856db4-h8k6g" Mar 08 22:22:31.708752 master-0 kubenswrapper[29458]: I0308 22:22:31.708594 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-h8k6g" Mar 08 22:22:32.217080 master-0 kubenswrapper[29458]: I0308 22:22:32.216987 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-h8k6g"] Mar 08 22:22:32.227166 master-0 kubenswrapper[29458]: W0308 22:22:32.224715 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod051edafe_c623_4173_b150_e4d1d5348042.slice/crio-4e0222f10d890b4d866877ae6f9bfe56df0bb44904a5d0397623cd984b5780da WatchSource:0}: Error finding container 4e0222f10d890b4d866877ae6f9bfe56df0bb44904a5d0397623cd984b5780da: Status 404 returned error can't find the container with id 4e0222f10d890b4d866877ae6f9bfe56df0bb44904a5d0397623cd984b5780da Mar 08 22:22:33.149373 master-0 kubenswrapper[29458]: I0308 22:22:33.149308 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-h8k6g" event={"ID":"051edafe-c623-4173-b150-e4d1d5348042","Type":"ContainerStarted","Data":"4e0222f10d890b4d866877ae6f9bfe56df0bb44904a5d0397623cd984b5780da"} Mar 08 22:22:33.523677 master-0 kubenswrapper[29458]: I0308 22:22:33.522932 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-42tmc"] Mar 08 22:22:33.525809 master-0 kubenswrapper[29458]: I0308 22:22:33.525780 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-42tmc" Mar 08 22:22:33.542462 master-0 kubenswrapper[29458]: I0308 22:22:33.541755 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-42tmc"] Mar 08 22:22:33.637813 master-0 kubenswrapper[29458]: I0308 22:22:33.637692 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfhp6\" (UniqueName: \"kubernetes.io/projected/d9541a7d-af57-4758-9884-addca466d304-kube-api-access-hfhp6\") pod \"cert-manager-cainjector-5545bd876-42tmc\" (UID: \"d9541a7d-af57-4758-9884-addca466d304\") " pod="cert-manager/cert-manager-cainjector-5545bd876-42tmc" Mar 08 22:22:33.647798 master-0 kubenswrapper[29458]: I0308 22:22:33.646764 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9541a7d-af57-4758-9884-addca466d304-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-42tmc\" (UID: \"d9541a7d-af57-4758-9884-addca466d304\") " pod="cert-manager/cert-manager-cainjector-5545bd876-42tmc" Mar 08 22:22:33.760855 master-0 kubenswrapper[29458]: I0308 22:22:33.758112 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9541a7d-af57-4758-9884-addca466d304-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-42tmc\" (UID: \"d9541a7d-af57-4758-9884-addca466d304\") " pod="cert-manager/cert-manager-cainjector-5545bd876-42tmc" Mar 08 22:22:33.761647 master-0 kubenswrapper[29458]: I0308 22:22:33.761622 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfhp6\" (UniqueName: \"kubernetes.io/projected/d9541a7d-af57-4758-9884-addca466d304-kube-api-access-hfhp6\") pod \"cert-manager-cainjector-5545bd876-42tmc\" (UID: \"d9541a7d-af57-4758-9884-addca466d304\") " pod="cert-manager/cert-manager-cainjector-5545bd876-42tmc" Mar 08 22:22:33.783259 master-0 kubenswrapper[29458]: I0308 22:22:33.783031 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9541a7d-af57-4758-9884-addca466d304-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-42tmc\" (UID: \"d9541a7d-af57-4758-9884-addca466d304\") " pod="cert-manager/cert-manager-cainjector-5545bd876-42tmc" Mar 08 22:22:33.804632 master-0 kubenswrapper[29458]: I0308 22:22:33.804577 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfhp6\" (UniqueName: \"kubernetes.io/projected/d9541a7d-af57-4758-9884-addca466d304-kube-api-access-hfhp6\") pod \"cert-manager-cainjector-5545bd876-42tmc\" (UID: \"d9541a7d-af57-4758-9884-addca466d304\") " pod="cert-manager/cert-manager-cainjector-5545bd876-42tmc" Mar 08 22:22:33.853250 master-0 kubenswrapper[29458]: I0308 22:22:33.850823 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-42tmc" Mar 08 22:22:34.296819 master-0 kubenswrapper[29458]: W0308 22:22:34.296730 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9541a7d_af57_4758_9884_addca466d304.slice/crio-78aa908b4fe90c3a4cbcaec10255931edccc3c0edd7d7c11db10d42db1dfcfce WatchSource:0}: Error finding container 78aa908b4fe90c3a4cbcaec10255931edccc3c0edd7d7c11db10d42db1dfcfce: Status 404 returned error can't find the container with id 78aa908b4fe90c3a4cbcaec10255931edccc3c0edd7d7c11db10d42db1dfcfce Mar 08 22:22:34.300168 master-0 kubenswrapper[29458]: I0308 22:22:34.299477 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-42tmc"] Mar 08 22:22:34.536665 master-0 kubenswrapper[29458]: I0308 22:22:34.536481 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-8zc84"] Mar 08 22:22:34.537703 master-0 kubenswrapper[29458]: I0308 22:22:34.537678 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-8zc84" Mar 08 22:22:34.543229 master-0 kubenswrapper[29458]: I0308 22:22:34.543036 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 08 22:22:34.544831 master-0 kubenswrapper[29458]: I0308 22:22:34.544800 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 08 22:22:34.561220 master-0 kubenswrapper[29458]: I0308 22:22:34.561136 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-8zc84"] Mar 08 22:22:34.688180 master-0 kubenswrapper[29458]: I0308 22:22:34.687721 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkkrf\" (UniqueName: \"kubernetes.io/projected/27c5d428-da77-44b1-ac04-fb0efdb376dc-kube-api-access-dkkrf\") pod \"nmstate-operator-75c5dccd6c-8zc84\" (UID: \"27c5d428-da77-44b1-ac04-fb0efdb376dc\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-8zc84" Mar 08 22:22:34.789748 master-0 kubenswrapper[29458]: I0308 22:22:34.789672 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkkrf\" (UniqueName: \"kubernetes.io/projected/27c5d428-da77-44b1-ac04-fb0efdb376dc-kube-api-access-dkkrf\") pod \"nmstate-operator-75c5dccd6c-8zc84\" (UID: \"27c5d428-da77-44b1-ac04-fb0efdb376dc\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-8zc84" Mar 08 22:22:34.908053 master-0 kubenswrapper[29458]: I0308 22:22:34.907046 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkkrf\" (UniqueName: \"kubernetes.io/projected/27c5d428-da77-44b1-ac04-fb0efdb376dc-kube-api-access-dkkrf\") pod \"nmstate-operator-75c5dccd6c-8zc84\" (UID: \"27c5d428-da77-44b1-ac04-fb0efdb376dc\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-8zc84" Mar 08 22:22:35.160195 master-0 kubenswrapper[29458]: I0308 22:22:35.159961 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-8zc84" Mar 08 22:22:35.172743 master-0 kubenswrapper[29458]: I0308 22:22:35.172684 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-42tmc" event={"ID":"d9541a7d-af57-4758-9884-addca466d304","Type":"ContainerStarted","Data":"78aa908b4fe90c3a4cbcaec10255931edccc3c0edd7d7c11db10d42db1dfcfce"} Mar 08 22:22:35.657221 master-0 kubenswrapper[29458]: I0308 22:22:35.652322 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-8zc84"] Mar 08 22:22:39.160946 master-0 kubenswrapper[29458]: I0308 22:22:39.160874 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-86c8b99677-4n6kh"] Mar 08 22:22:39.162342 master-0 kubenswrapper[29458]: I0308 22:22:39.162040 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-86c8b99677-4n6kh" Mar 08 22:22:39.170515 master-0 kubenswrapper[29458]: I0308 22:22:39.167867 29458 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 08 22:22:39.170515 master-0 kubenswrapper[29458]: I0308 22:22:39.168889 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 08 22:22:39.181803 master-0 kubenswrapper[29458]: I0308 22:22:39.181510 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 08 22:22:39.186192 master-0 kubenswrapper[29458]: I0308 22:22:39.185620 29458 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 08 22:22:39.206126 master-0 kubenswrapper[29458]: I0308 22:22:39.206021 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-86c8b99677-4n6kh"] Mar 08 22:22:39.265989 master-0 kubenswrapper[29458]: I0308 22:22:39.263450 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-8zc84" event={"ID":"27c5d428-da77-44b1-ac04-fb0efdb376dc","Type":"ContainerStarted","Data":"616d7c95f38a5851505635fb9a24046955474999414e40711f15323bc0b68465"} Mar 08 22:22:39.351162 master-0 kubenswrapper[29458]: I0308 22:22:39.348794 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84495ae1-bdc1-4937-bb22-afe16bc08fd8-webhook-cert\") pod \"metallb-operator-controller-manager-86c8b99677-4n6kh\" (UID: \"84495ae1-bdc1-4937-bb22-afe16bc08fd8\") " pod="metallb-system/metallb-operator-controller-manager-86c8b99677-4n6kh" Mar 08 22:22:39.351162 master-0 kubenswrapper[29458]: I0308 22:22:39.348904 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84495ae1-bdc1-4937-bb22-afe16bc08fd8-apiservice-cert\") pod \"metallb-operator-controller-manager-86c8b99677-4n6kh\" (UID: \"84495ae1-bdc1-4937-bb22-afe16bc08fd8\") " pod="metallb-system/metallb-operator-controller-manager-86c8b99677-4n6kh" Mar 08 22:22:39.351162 master-0 kubenswrapper[29458]: I0308 22:22:39.348934 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7287\" (UniqueName: 
\"kubernetes.io/projected/84495ae1-bdc1-4937-bb22-afe16bc08fd8-kube-api-access-n7287\") pod \"metallb-operator-controller-manager-86c8b99677-4n6kh\" (UID: \"84495ae1-bdc1-4937-bb22-afe16bc08fd8\") " pod="metallb-system/metallb-operator-controller-manager-86c8b99677-4n6kh" Mar 08 22:22:39.455111 master-0 kubenswrapper[29458]: I0308 22:22:39.454060 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84495ae1-bdc1-4937-bb22-afe16bc08fd8-apiservice-cert\") pod \"metallb-operator-controller-manager-86c8b99677-4n6kh\" (UID: \"84495ae1-bdc1-4937-bb22-afe16bc08fd8\") " pod="metallb-system/metallb-operator-controller-manager-86c8b99677-4n6kh" Mar 08 22:22:39.455111 master-0 kubenswrapper[29458]: I0308 22:22:39.454150 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7287\" (UniqueName: \"kubernetes.io/projected/84495ae1-bdc1-4937-bb22-afe16bc08fd8-kube-api-access-n7287\") pod \"metallb-operator-controller-manager-86c8b99677-4n6kh\" (UID: \"84495ae1-bdc1-4937-bb22-afe16bc08fd8\") " pod="metallb-system/metallb-operator-controller-manager-86c8b99677-4n6kh" Mar 08 22:22:39.461096 master-0 kubenswrapper[29458]: I0308 22:22:39.456367 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84495ae1-bdc1-4937-bb22-afe16bc08fd8-webhook-cert\") pod \"metallb-operator-controller-manager-86c8b99677-4n6kh\" (UID: \"84495ae1-bdc1-4937-bb22-afe16bc08fd8\") " pod="metallb-system/metallb-operator-controller-manager-86c8b99677-4n6kh" Mar 08 22:22:39.466227 master-0 kubenswrapper[29458]: I0308 22:22:39.461336 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84495ae1-bdc1-4937-bb22-afe16bc08fd8-webhook-cert\") pod \"metallb-operator-controller-manager-86c8b99677-4n6kh\" (UID: \"84495ae1-bdc1-4937-bb22-afe16bc08fd8\") " pod="metallb-system/metallb-operator-controller-manager-86c8b99677-4n6kh" Mar 08 22:22:39.466227 master-0 kubenswrapper[29458]: I0308 22:22:39.465236 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84495ae1-bdc1-4937-bb22-afe16bc08fd8-apiservice-cert\") pod \"metallb-operator-controller-manager-86c8b99677-4n6kh\" (UID: \"84495ae1-bdc1-4937-bb22-afe16bc08fd8\") " pod="metallb-system/metallb-operator-controller-manager-86c8b99677-4n6kh" Mar 08 22:22:39.499882 master-0 kubenswrapper[29458]: I0308 22:22:39.497147 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7287\" (UniqueName: \"kubernetes.io/projected/84495ae1-bdc1-4937-bb22-afe16bc08fd8-kube-api-access-n7287\") pod \"metallb-operator-controller-manager-86c8b99677-4n6kh\" (UID: \"84495ae1-bdc1-4937-bb22-afe16bc08fd8\") " pod="metallb-system/metallb-operator-controller-manager-86c8b99677-4n6kh" Mar 08 22:22:39.499882 master-0 kubenswrapper[29458]: I0308 22:22:39.498524 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-86c8b99677-4n6kh" Mar 08 22:22:39.724893 master-0 kubenswrapper[29458]: I0308 22:22:39.724731 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-74d9f6c4f8-tprw8"] Mar 08 22:22:39.726353 master-0 kubenswrapper[29458]: I0308 22:22:39.726327 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-74d9f6c4f8-tprw8" Mar 08 22:22:39.736904 master-0 kubenswrapper[29458]: I0308 22:22:39.736860 29458 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 08 22:22:39.738017 master-0 kubenswrapper[29458]: I0308 22:22:39.738001 29458 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Mar 08 22:22:39.744779 master-0 kubenswrapper[29458]: I0308 22:22:39.743920 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-74d9f6c4f8-tprw8"] Mar 08 22:22:39.764109 master-0 kubenswrapper[29458]: I0308 22:22:39.763478 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d1841fae-ac9f-4d3d-94a6-a5787fa601b9-apiservice-cert\") pod \"metallb-operator-webhook-server-74d9f6c4f8-tprw8\" (UID: \"d1841fae-ac9f-4d3d-94a6-a5787fa601b9\") " pod="metallb-system/metallb-operator-webhook-server-74d9f6c4f8-tprw8" Mar 08 22:22:39.764109 master-0 kubenswrapper[29458]: I0308 22:22:39.763638 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsk54\" (UniqueName: \"kubernetes.io/projected/d1841fae-ac9f-4d3d-94a6-a5787fa601b9-kube-api-access-wsk54\") pod \"metallb-operator-webhook-server-74d9f6c4f8-tprw8\" (UID: \"d1841fae-ac9f-4d3d-94a6-a5787fa601b9\") " pod="metallb-system/metallb-operator-webhook-server-74d9f6c4f8-tprw8" Mar 08 22:22:39.764109 master-0 kubenswrapper[29458]: I0308 22:22:39.763666 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d1841fae-ac9f-4d3d-94a6-a5787fa601b9-webhook-cert\") pod \"metallb-operator-webhook-server-74d9f6c4f8-tprw8\" (UID: \"d1841fae-ac9f-4d3d-94a6-a5787fa601b9\") " pod="metallb-system/metallb-operator-webhook-server-74d9f6c4f8-tprw8" Mar 08 22:22:39.864989 master-0 kubenswrapper[29458]: I0308 22:22:39.864661 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsk54\" (UniqueName: \"kubernetes.io/projected/d1841fae-ac9f-4d3d-94a6-a5787fa601b9-kube-api-access-wsk54\") pod \"metallb-operator-webhook-server-74d9f6c4f8-tprw8\" (UID: \"d1841fae-ac9f-4d3d-94a6-a5787fa601b9\") " pod="metallb-system/metallb-operator-webhook-server-74d9f6c4f8-tprw8" Mar 08 22:22:39.865270 master-0 kubenswrapper[29458]: I0308 22:22:39.865136 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d1841fae-ac9f-4d3d-94a6-a5787fa601b9-webhook-cert\") pod \"metallb-operator-webhook-server-74d9f6c4f8-tprw8\" (UID: \"d1841fae-ac9f-4d3d-94a6-a5787fa601b9\") " pod="metallb-system/metallb-operator-webhook-server-74d9f6c4f8-tprw8" Mar 08 22:22:39.865270 master-0 kubenswrapper[29458]: I0308 22:22:39.865242 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d1841fae-ac9f-4d3d-94a6-a5787fa601b9-apiservice-cert\") pod \"metallb-operator-webhook-server-74d9f6c4f8-tprw8\" (UID: \"d1841fae-ac9f-4d3d-94a6-a5787fa601b9\") " pod="metallb-system/metallb-operator-webhook-server-74d9f6c4f8-tprw8" Mar 08 22:22:39.880095 master-0 kubenswrapper[29458]: I0308 22:22:39.872017 29458 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d1841fae-ac9f-4d3d-94a6-a5787fa601b9-apiservice-cert\") pod \"metallb-operator-webhook-server-74d9f6c4f8-tprw8\" (UID: \"d1841fae-ac9f-4d3d-94a6-a5787fa601b9\") " pod="metallb-system/metallb-operator-webhook-server-74d9f6c4f8-tprw8" Mar 08 22:22:39.880095 master-0 kubenswrapper[29458]: I0308 22:22:39.872652 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d1841fae-ac9f-4d3d-94a6-a5787fa601b9-webhook-cert\") pod \"metallb-operator-webhook-server-74d9f6c4f8-tprw8\" (UID: \"d1841fae-ac9f-4d3d-94a6-a5787fa601b9\") " pod="metallb-system/metallb-operator-webhook-server-74d9f6c4f8-tprw8" Mar 08 22:22:39.935476 master-0 kubenswrapper[29458]: I0308 22:22:39.934243 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsk54\" (UniqueName: \"kubernetes.io/projected/d1841fae-ac9f-4d3d-94a6-a5787fa601b9-kube-api-access-wsk54\") pod \"metallb-operator-webhook-server-74d9f6c4f8-tprw8\" (UID: \"d1841fae-ac9f-4d3d-94a6-a5787fa601b9\") " pod="metallb-system/metallb-operator-webhook-server-74d9f6c4f8-tprw8" Mar 08 22:22:40.067549 master-0 kubenswrapper[29458]: I0308 22:22:40.067479 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-74d9f6c4f8-tprw8" Mar 08 22:22:40.245691 master-0 kubenswrapper[29458]: I0308 22:22:40.244105 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-86c8b99677-4n6kh"] Mar 08 22:22:40.288590 master-0 kubenswrapper[29458]: I0308 22:22:40.288257 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-86c8b99677-4n6kh" event={"ID":"84495ae1-bdc1-4937-bb22-afe16bc08fd8","Type":"ContainerStarted","Data":"b3a65b98a3d197c26846b4da3733d82125259a709f00b9424d6030fa62d7bad5"} Mar 08 22:22:40.295965 master-0 kubenswrapper[29458]: I0308 22:22:40.295906 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-h8k6g" event={"ID":"051edafe-c623-4173-b150-e4d1d5348042","Type":"ContainerStarted","Data":"57e8d35eaec6b96caac7c358265e5069a17c76dace2e86e5385cca9522f039fc"} Mar 08 22:22:40.296666 master-0 kubenswrapper[29458]: I0308 22:22:40.296646 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-h8k6g" Mar 08 22:22:40.342273 master-0 kubenswrapper[29458]: I0308 22:22:40.335411 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-42tmc" event={"ID":"d9541a7d-af57-4758-9884-addca466d304","Type":"ContainerStarted","Data":"9a563e8a87709b1e50f0fb374f7145450ecd72bbe2a0edd0e6a5ec3d0195e813"} Mar 08 22:22:40.392364 master-0 kubenswrapper[29458]: I0308 22:22:40.392237 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-42tmc" podStartSLOduration=2.384024209 podStartE2EDuration="7.392213566s" podCreationTimestamp="2026-03-08 22:22:33 +0000 UTC" firstStartedPulling="2026-03-08 22:22:34.301229065 +0000 UTC m=+523.589286657" lastFinishedPulling="2026-03-08 22:22:39.309418422 +0000 UTC m=+528.597476014" observedRunningTime="2026-03-08 22:22:40.375526027 +0000 UTC m=+529.663583619" watchObservedRunningTime="2026-03-08 22:22:40.392213566 +0000 UTC 
m=+529.680271158" Mar 08 22:22:40.392646 master-0 kubenswrapper[29458]: I0308 22:22:40.392558 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-h8k6g" podStartSLOduration=2.335386944 podStartE2EDuration="9.392551584s" podCreationTimestamp="2026-03-08 22:22:31 +0000 UTC" firstStartedPulling="2026-03-08 22:22:32.226491551 +0000 UTC m=+521.514549143" lastFinishedPulling="2026-03-08 22:22:39.283656191 +0000 UTC m=+528.571713783" observedRunningTime="2026-03-08 22:22:40.326668373 +0000 UTC m=+529.614725965" watchObservedRunningTime="2026-03-08 22:22:40.392551584 +0000 UTC m=+529.680609176" Mar 08 22:22:40.588485 master-0 kubenswrapper[29458]: I0308 22:22:40.588419 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-74d9f6c4f8-tprw8"] Mar 08 22:22:40.612792 master-0 kubenswrapper[29458]: W0308 22:22:40.612646 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1841fae_ac9f_4d3d_94a6_a5787fa601b9.slice/crio-f127ba4fd5bd0b5f32e3ec6940c0b83a2b426db5f6166eb37f3b8315031d2d19 WatchSource:0}: Error finding container f127ba4fd5bd0b5f32e3ec6940c0b83a2b426db5f6166eb37f3b8315031d2d19: Status 404 returned error can't find the container with id f127ba4fd5bd0b5f32e3ec6940c0b83a2b426db5f6166eb37f3b8315031d2d19 Mar 08 22:22:41.364103 master-0 kubenswrapper[29458]: I0308 22:22:41.363081 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-74d9f6c4f8-tprw8" event={"ID":"d1841fae-ac9f-4d3d-94a6-a5787fa601b9","Type":"ContainerStarted","Data":"f127ba4fd5bd0b5f32e3ec6940c0b83a2b426db5f6166eb37f3b8315031d2d19"} Mar 08 22:22:43.385108 master-0 kubenswrapper[29458]: I0308 22:22:43.384225 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-8zc84" event={"ID":"27c5d428-da77-44b1-ac04-fb0efdb376dc","Type":"ContainerStarted","Data":"235ed29609667df3801732e294a03bf0bc89d7d8a5f3ea35e46f0865d44fd946"} Mar 08 22:22:43.414182 master-0 kubenswrapper[29458]: I0308 22:22:43.414066 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-8zc84" podStartSLOduration=5.433034277 podStartE2EDuration="9.414045788s" podCreationTimestamp="2026-03-08 22:22:34 +0000 UTC" firstStartedPulling="2026-03-08 22:22:39.139316805 +0000 UTC m=+528.427374397" lastFinishedPulling="2026-03-08 22:22:43.120328316 +0000 UTC m=+532.408385908" observedRunningTime="2026-03-08 22:22:43.412049616 +0000 UTC m=+532.700107208" watchObservedRunningTime="2026-03-08 22:22:43.414045788 +0000 UTC m=+532.702103380" Mar 08 22:22:46.711128 master-0 kubenswrapper[29458]: I0308 22:22:46.711024 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-h8k6g" Mar 08 22:22:50.350149 master-0 kubenswrapper[29458]: I0308 22:22:50.350057 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-9cwqt"] Mar 08 22:22:50.357171 master-0 kubenswrapper[29458]: I0308 22:22:50.357111 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-9cwqt" Mar 08 22:22:50.383420 master-0 kubenswrapper[29458]: I0308 22:22:50.383294 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-9cwqt"] Mar 08 22:22:50.415513 master-0 kubenswrapper[29458]: I0308 22:22:50.415215 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5b72a424-72fb-4dec-a32b-bff521f358f6-bound-sa-token\") pod \"cert-manager-545d4d4674-9cwqt\" (UID: \"5b72a424-72fb-4dec-a32b-bff521f358f6\") " pod="cert-manager/cert-manager-545d4d4674-9cwqt" Mar 08 22:22:50.415513 master-0 kubenswrapper[29458]: I0308 22:22:50.415310 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwtlg\" (UniqueName: \"kubernetes.io/projected/5b72a424-72fb-4dec-a32b-bff521f358f6-kube-api-access-qwtlg\") pod \"cert-manager-545d4d4674-9cwqt\" (UID: \"5b72a424-72fb-4dec-a32b-bff521f358f6\") " pod="cert-manager/cert-manager-545d4d4674-9cwqt" Mar 08 22:22:50.517003 master-0 kubenswrapper[29458]: I0308 22:22:50.516938 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwtlg\" (UniqueName: \"kubernetes.io/projected/5b72a424-72fb-4dec-a32b-bff521f358f6-kube-api-access-qwtlg\") pod \"cert-manager-545d4d4674-9cwqt\" (UID: \"5b72a424-72fb-4dec-a32b-bff521f358f6\") " pod="cert-manager/cert-manager-545d4d4674-9cwqt" Mar 08 22:22:50.517379 master-0 kubenswrapper[29458]: I0308 22:22:50.517364 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5b72a424-72fb-4dec-a32b-bff521f358f6-bound-sa-token\") pod \"cert-manager-545d4d4674-9cwqt\" (UID: \"5b72a424-72fb-4dec-a32b-bff521f358f6\") " pod="cert-manager/cert-manager-545d4d4674-9cwqt" Mar 08 22:22:50.536208 master-0 kubenswrapper[29458]: I0308 22:22:50.536161 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5b72a424-72fb-4dec-a32b-bff521f358f6-bound-sa-token\") pod \"cert-manager-545d4d4674-9cwqt\" (UID: \"5b72a424-72fb-4dec-a32b-bff521f358f6\") " pod="cert-manager/cert-manager-545d4d4674-9cwqt" Mar 08 22:22:50.563104 master-0 kubenswrapper[29458]: I0308 22:22:50.557707 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwtlg\" (UniqueName: \"kubernetes.io/projected/5b72a424-72fb-4dec-a32b-bff521f358f6-kube-api-access-qwtlg\") pod \"cert-manager-545d4d4674-9cwqt\" (UID: \"5b72a424-72fb-4dec-a32b-bff521f358f6\") " pod="cert-manager/cert-manager-545d4d4674-9cwqt" Mar 08 22:22:50.733838 master-0 kubenswrapper[29458]: I0308 22:22:50.732580 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-9cwqt" Mar 08 22:22:51.188874 master-0 kubenswrapper[29458]: W0308 22:22:51.188795 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b72a424_72fb_4dec_a32b_bff521f358f6.slice/crio-db17bf30b8582eb5b12efb5ed6008039b274bd283ac07a641025684945804a5a WatchSource:0}: Error finding container db17bf30b8582eb5b12efb5ed6008039b274bd283ac07a641025684945804a5a: Status 404 returned error can't find the container with id db17bf30b8582eb5b12efb5ed6008039b274bd283ac07a641025684945804a5a Mar 08 22:22:51.189150 master-0 kubenswrapper[29458]: I0308 22:22:51.189015 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-9cwqt"] Mar 08 22:22:51.498109 master-0 kubenswrapper[29458]: I0308 22:22:51.497911 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-74d9f6c4f8-tprw8" event={"ID":"d1841fae-ac9f-4d3d-94a6-a5787fa601b9","Type":"ContainerStarted","Data":"e2218580a49becd5d5600a702f986bdd4f3117c77d29fb856cc723a411381b5e"} Mar 08 22:22:51.498109 master-0 kubenswrapper[29458]: I0308 22:22:51.498025 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-74d9f6c4f8-tprw8" Mar 08 22:22:51.499592 master-0 kubenswrapper[29458]: I0308 22:22:51.499549 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-9cwqt" event={"ID":"5b72a424-72fb-4dec-a32b-bff521f358f6","Type":"ContainerStarted","Data":"03385b21878435f1f74d50083e6ccd1cbd2e39e9f4363fa155865e3082a8d637"} Mar 08 22:22:51.499660 master-0 kubenswrapper[29458]: I0308 22:22:51.499600 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-9cwqt" event={"ID":"5b72a424-72fb-4dec-a32b-bff521f358f6","Type":"ContainerStarted","Data":"db17bf30b8582eb5b12efb5ed6008039b274bd283ac07a641025684945804a5a"} Mar 08 22:22:51.501372 master-0 kubenswrapper[29458]: I0308 22:22:51.501322 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-86c8b99677-4n6kh" event={"ID":"84495ae1-bdc1-4937-bb22-afe16bc08fd8","Type":"ContainerStarted","Data":"9b2533d683fe7a6636710ad6a2c89248ce1d653980c1e5097e7c9f0eddc007fe"} Mar 08 22:22:51.501496 master-0 kubenswrapper[29458]: I0308 22:22:51.501469 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-86c8b99677-4n6kh" Mar 08 22:22:51.531214 master-0 kubenswrapper[29458]: I0308 22:22:51.529352 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-74d9f6c4f8-tprw8" podStartSLOduration=2.358133878 podStartE2EDuration="12.529321806s" podCreationTimestamp="2026-03-08 22:22:39 +0000 UTC" firstStartedPulling="2026-03-08 22:22:40.616994187 +0000 UTC m=+529.905051779" lastFinishedPulling="2026-03-08 22:22:50.788182115 +0000 UTC m=+540.076239707" observedRunningTime="2026-03-08 22:22:51.525825946 +0000 UTC m=+540.813883538" watchObservedRunningTime="2026-03-08 22:22:51.529321806 +0000 UTC m=+540.817379408" Mar 08 22:22:51.554509 master-0 kubenswrapper[29458]: I0308 22:22:51.554385 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-9cwqt" podStartSLOduration=1.554354369 podStartE2EDuration="1.554354369s" 
podCreationTimestamp="2026-03-08 22:22:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:22:51.552786159 +0000 UTC m=+540.840843751" watchObservedRunningTime="2026-03-08 22:22:51.554354369 +0000 UTC m=+540.842412011" Mar 08 22:22:51.600129 master-0 kubenswrapper[29458]: I0308 22:22:51.598526 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-86c8b99677-4n6kh" podStartSLOduration=2.090910665 podStartE2EDuration="12.598500002s" podCreationTimestamp="2026-03-08 22:22:39 +0000 UTC" firstStartedPulling="2026-03-08 22:22:40.246423631 +0000 UTC m=+529.534481223" lastFinishedPulling="2026-03-08 22:22:50.754012968 +0000 UTC m=+540.042070560" observedRunningTime="2026-03-08 22:22:51.58713387 +0000 UTC m=+540.875191472" watchObservedRunningTime="2026-03-08 22:22:51.598500002 +0000 UTC m=+540.886557594" Mar 08 22:22:52.516123 master-0 kubenswrapper[29458]: I0308 22:22:52.510945 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-lpdb9"] Mar 08 22:22:52.516123 master-0 kubenswrapper[29458]: I0308 22:22:52.515466 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lpdb9" Mar 08 22:22:52.532060 master-0 kubenswrapper[29458]: I0308 22:22:52.527209 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-lpdb9"] Mar 08 22:22:52.532060 master-0 kubenswrapper[29458]: I0308 22:22:52.530144 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Mar 08 22:22:52.532060 master-0 kubenswrapper[29458]: I0308 22:22:52.530322 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Mar 08 22:22:52.641534 master-0 kubenswrapper[29458]: I0308 22:22:52.641030 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-547f5c9c5b-6bhl4"] Mar 08 22:22:52.642139 master-0 kubenswrapper[29458]: I0308 22:22:52.642116 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5c9c5b-6bhl4" Mar 08 22:22:52.648268 master-0 kubenswrapper[29458]: I0308 22:22:52.645446 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Mar 08 22:22:52.659097 master-0 kubenswrapper[29458]: I0308 22:22:52.656045 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhf7t\" (UniqueName: \"kubernetes.io/projected/169b2fa3-6f45-47e1-ab7f-b20d21509fd1-kube-api-access-dhf7t\") pod \"obo-prometheus-operator-68bc856cb9-lpdb9\" (UID: \"169b2fa3-6f45-47e1-ab7f-b20d21509fd1\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lpdb9" Mar 08 22:22:52.664829 master-0 kubenswrapper[29458]: I0308 22:22:52.664156 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-547f5c9c5b-6bhl4"] Mar 08 22:22:52.682426 master-0 kubenswrapper[29458]: I0308 22:22:52.682346 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-547f5c9c5b-mz4dt"] Mar 08 22:22:52.684145 master-0 kubenswrapper[29458]: I0308 22:22:52.683763 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5c9c5b-mz4dt" Mar 08 22:22:52.748778 master-0 kubenswrapper[29458]: I0308 22:22:52.748315 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-547f5c9c5b-mz4dt"] Mar 08 22:22:52.759267 master-0 kubenswrapper[29458]: I0308 22:22:52.757990 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/44d7e70d-19bd-415a-927f-5fb224f58503-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5c9c5b-6bhl4\" (UID: \"44d7e70d-19bd-415a-927f-5fb224f58503\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5c9c5b-6bhl4" Mar 08 22:22:52.759267 master-0 kubenswrapper[29458]: I0308 22:22:52.758117 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhf7t\" (UniqueName: \"kubernetes.io/projected/169b2fa3-6f45-47e1-ab7f-b20d21509fd1-kube-api-access-dhf7t\") pod \"obo-prometheus-operator-68bc856cb9-lpdb9\" (UID: \"169b2fa3-6f45-47e1-ab7f-b20d21509fd1\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lpdb9" Mar 08 22:22:52.759267 master-0 kubenswrapper[29458]: I0308 22:22:52.758215 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/44d7e70d-19bd-415a-927f-5fb224f58503-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5c9c5b-6bhl4\" (UID: \"44d7e70d-19bd-415a-927f-5fb224f58503\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5c9c5b-6bhl4" Mar 08 22:22:52.802199 master-0 kubenswrapper[29458]: I0308 22:22:52.790789 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhf7t\" (UniqueName: \"kubernetes.io/projected/169b2fa3-6f45-47e1-ab7f-b20d21509fd1-kube-api-access-dhf7t\") pod \"obo-prometheus-operator-68bc856cb9-lpdb9\" (UID: \"169b2fa3-6f45-47e1-ab7f-b20d21509fd1\") " 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lpdb9" Mar 08 22:22:52.872731 master-0 kubenswrapper[29458]: I0308 22:22:52.870510 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lpdb9" Mar 08 22:22:52.873539 master-0 kubenswrapper[29458]: I0308 22:22:52.873278 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/80627f32-4511-45ff-8d5a-868930aa5ec9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5c9c5b-mz4dt\" (UID: \"80627f32-4511-45ff-8d5a-868930aa5ec9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5c9c5b-mz4dt" Mar 08 22:22:52.873539 master-0 kubenswrapper[29458]: I0308 22:22:52.873352 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/44d7e70d-19bd-415a-927f-5fb224f58503-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5c9c5b-6bhl4\" (UID: \"44d7e70d-19bd-415a-927f-5fb224f58503\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5c9c5b-6bhl4" Mar 08 22:22:52.873539 master-0 kubenswrapper[29458]: I0308 22:22:52.873415 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/44d7e70d-19bd-415a-927f-5fb224f58503-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5c9c5b-6bhl4\" (UID: \"44d7e70d-19bd-415a-927f-5fb224f58503\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5c9c5b-6bhl4" Mar 08 22:22:52.873539 master-0 kubenswrapper[29458]: I0308 22:22:52.873456 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/80627f32-4511-45ff-8d5a-868930aa5ec9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5c9c5b-mz4dt\" (UID: \"80627f32-4511-45ff-8d5a-868930aa5ec9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5c9c5b-mz4dt" Mar 08 22:22:52.889332 master-0 kubenswrapper[29458]: I0308 22:22:52.889273 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-4bvf2"] Mar 08 22:22:52.890163 master-0 kubenswrapper[29458]: I0308 22:22:52.890143 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/44d7e70d-19bd-415a-927f-5fb224f58503-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5c9c5b-6bhl4\" (UID: \"44d7e70d-19bd-415a-927f-5fb224f58503\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5c9c5b-6bhl4" Mar 08 22:22:52.890283 master-0 kubenswrapper[29458]: I0308 22:22:52.890208 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/44d7e70d-19bd-415a-927f-5fb224f58503-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5c9c5b-6bhl4\" (UID: \"44d7e70d-19bd-415a-927f-5fb224f58503\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5c9c5b-6bhl4" Mar 08 22:22:52.896787 master-0 kubenswrapper[29458]: I0308 22:22:52.892611 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4bvf2" Mar 08 22:22:52.899640 master-0 kubenswrapper[29458]: I0308 22:22:52.898204 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Mar 08 22:22:52.937156 master-0 kubenswrapper[29458]: I0308 22:22:52.937099 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-4bvf2"] Mar 08 22:22:52.980133 master-0 kubenswrapper[29458]: I0308 22:22:52.978369 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/80627f32-4511-45ff-8d5a-868930aa5ec9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5c9c5b-mz4dt\" (UID: \"80627f32-4511-45ff-8d5a-868930aa5ec9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5c9c5b-mz4dt" Mar 08 22:22:52.980133 master-0 kubenswrapper[29458]: I0308 22:22:52.978492 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/80627f32-4511-45ff-8d5a-868930aa5ec9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5c9c5b-mz4dt\" (UID: \"80627f32-4511-45ff-8d5a-868930aa5ec9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5c9c5b-mz4dt" Mar 08 22:22:52.986406 master-0 kubenswrapper[29458]: I0308 22:22:52.986362 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/80627f32-4511-45ff-8d5a-868930aa5ec9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5c9c5b-mz4dt\" (UID: \"80627f32-4511-45ff-8d5a-868930aa5ec9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5c9c5b-mz4dt" Mar 08 22:22:52.992123 master-0 kubenswrapper[29458]: I0308 22:22:52.990574 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/80627f32-4511-45ff-8d5a-868930aa5ec9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5c9c5b-mz4dt\" (UID: \"80627f32-4511-45ff-8d5a-868930aa5ec9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5c9c5b-mz4dt" Mar 08 22:22:53.084350 master-0 kubenswrapper[29458]: I0308 22:22:53.083693 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k75g\" (UniqueName: \"kubernetes.io/projected/27d457fa-1c81-48a3-bc36-9e16146395b4-kube-api-access-8k75g\") pod \"observability-operator-59bdc8b94-4bvf2\" (UID: \"27d457fa-1c81-48a3-bc36-9e16146395b4\") " pod="openshift-operators/observability-operator-59bdc8b94-4bvf2" Mar 08 22:22:53.084350 master-0 kubenswrapper[29458]: I0308 22:22:53.083754 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/27d457fa-1c81-48a3-bc36-9e16146395b4-observability-operator-tls\") pod \"observability-operator-59bdc8b94-4bvf2\" (UID: \"27d457fa-1c81-48a3-bc36-9e16146395b4\") " pod="openshift-operators/observability-operator-59bdc8b94-4bvf2" Mar 08 22:22:53.113100 master-0 kubenswrapper[29458]: I0308 22:22:53.110335 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5c9c5b-6bhl4" Mar 08 22:22:53.113100 master-0 kubenswrapper[29458]: I0308 22:22:53.110825 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-h6ntr"] Mar 08 22:22:53.113100 master-0 kubenswrapper[29458]: I0308 22:22:53.112249 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-h6ntr" Mar 08 22:22:53.123865 master-0 kubenswrapper[29458]: I0308 22:22:53.117934 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-h6ntr"] Mar 08 22:22:53.200129 master-0 kubenswrapper[29458]: I0308 22:22:53.184941 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k75g\" (UniqueName: \"kubernetes.io/projected/27d457fa-1c81-48a3-bc36-9e16146395b4-kube-api-access-8k75g\") pod \"observability-operator-59bdc8b94-4bvf2\" (UID: \"27d457fa-1c81-48a3-bc36-9e16146395b4\") " pod="openshift-operators/observability-operator-59bdc8b94-4bvf2" Mar 08 22:22:53.200129 master-0 kubenswrapper[29458]: I0308 22:22:53.184999 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/27d457fa-1c81-48a3-bc36-9e16146395b4-observability-operator-tls\") pod \"observability-operator-59bdc8b94-4bvf2\" (UID: \"27d457fa-1c81-48a3-bc36-9e16146395b4\") " pod="openshift-operators/observability-operator-59bdc8b94-4bvf2" Mar 08 22:22:53.200129 master-0 kubenswrapper[29458]: I0308 22:22:53.191058 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/27d457fa-1c81-48a3-bc36-9e16146395b4-observability-operator-tls\") pod \"observability-operator-59bdc8b94-4bvf2\" (UID: \"27d457fa-1c81-48a3-bc36-9e16146395b4\") " pod="openshift-operators/observability-operator-59bdc8b94-4bvf2" Mar 08 22:22:53.201032 master-0 kubenswrapper[29458]: I0308 22:22:53.200648 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5c9c5b-mz4dt" Mar 08 22:22:53.236135 master-0 kubenswrapper[29458]: I0308 22:22:53.236087 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k75g\" (UniqueName: \"kubernetes.io/projected/27d457fa-1c81-48a3-bc36-9e16146395b4-kube-api-access-8k75g\") pod \"observability-operator-59bdc8b94-4bvf2\" (UID: \"27d457fa-1c81-48a3-bc36-9e16146395b4\") " pod="openshift-operators/observability-operator-59bdc8b94-4bvf2" Mar 08 22:22:53.286374 master-0 kubenswrapper[29458]: I0308 22:22:53.286269 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2caa34ff-b06b-412d-8b01-8113f9d814e0-openshift-service-ca\") pod \"perses-operator-5bf474d74f-h6ntr\" (UID: \"2caa34ff-b06b-412d-8b01-8113f9d814e0\") " pod="openshift-operators/perses-operator-5bf474d74f-h6ntr" Mar 08 22:22:53.286374 master-0 kubenswrapper[29458]: I0308 22:22:53.286355 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpmvk\" (UniqueName: \"kubernetes.io/projected/2caa34ff-b06b-412d-8b01-8113f9d814e0-kube-api-access-cpmvk\") pod \"perses-operator-5bf474d74f-h6ntr\" (UID: \"2caa34ff-b06b-412d-8b01-8113f9d814e0\") " pod="openshift-operators/perses-operator-5bf474d74f-h6ntr" Mar 08 22:22:53.370152 master-0 kubenswrapper[29458]: I0308 22:22:53.370023 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4bvf2" Mar 08 22:22:53.398185 master-0 kubenswrapper[29458]: I0308 22:22:53.391478 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpmvk\" (UniqueName: \"kubernetes.io/projected/2caa34ff-b06b-412d-8b01-8113f9d814e0-kube-api-access-cpmvk\") pod \"perses-operator-5bf474d74f-h6ntr\" (UID: \"2caa34ff-b06b-412d-8b01-8113f9d814e0\") " pod="openshift-operators/perses-operator-5bf474d74f-h6ntr" Mar 08 22:22:53.398185 master-0 kubenswrapper[29458]: I0308 22:22:53.391615 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2caa34ff-b06b-412d-8b01-8113f9d814e0-openshift-service-ca\") pod \"perses-operator-5bf474d74f-h6ntr\" (UID: \"2caa34ff-b06b-412d-8b01-8113f9d814e0\") " pod="openshift-operators/perses-operator-5bf474d74f-h6ntr" Mar 08 22:22:53.398185 master-0 kubenswrapper[29458]: I0308 22:22:53.392789 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2caa34ff-b06b-412d-8b01-8113f9d814e0-openshift-service-ca\") pod \"perses-operator-5bf474d74f-h6ntr\" (UID: \"2caa34ff-b06b-412d-8b01-8113f9d814e0\") " pod="openshift-operators/perses-operator-5bf474d74f-h6ntr" Mar 08 22:22:53.426171 master-0 kubenswrapper[29458]: W0308 22:22:53.424756 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod169b2fa3_6f45_47e1_ab7f_b20d21509fd1.slice/crio-ba75c74a7a1f002062332fd4e9e509c04b347b049ce23aeb6c6f5a2954448b71 WatchSource:0}: Error finding container ba75c74a7a1f002062332fd4e9e509c04b347b049ce23aeb6c6f5a2954448b71: Status 404 returned error can't find the container with id ba75c74a7a1f002062332fd4e9e509c04b347b049ce23aeb6c6f5a2954448b71 Mar 08 22:22:53.426171 master-0 
kubenswrapper[29458]: I0308 22:22:53.425575 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-lpdb9"] Mar 08 22:22:53.431438 master-0 kubenswrapper[29458]: I0308 22:22:53.431383 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpmvk\" (UniqueName: \"kubernetes.io/projected/2caa34ff-b06b-412d-8b01-8113f9d814e0-kube-api-access-cpmvk\") pod \"perses-operator-5bf474d74f-h6ntr\" (UID: \"2caa34ff-b06b-412d-8b01-8113f9d814e0\") " pod="openshift-operators/perses-operator-5bf474d74f-h6ntr" Mar 08 22:22:53.473497 master-0 kubenswrapper[29458]: I0308 22:22:53.470062 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-h6ntr" Mar 08 22:22:53.570140 master-0 kubenswrapper[29458]: I0308 22:22:53.568745 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lpdb9" event={"ID":"169b2fa3-6f45-47e1-ab7f-b20d21509fd1","Type":"ContainerStarted","Data":"ba75c74a7a1f002062332fd4e9e509c04b347b049ce23aeb6c6f5a2954448b71"} Mar 08 22:22:53.734476 master-0 kubenswrapper[29458]: W0308 22:22:53.734364 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44d7e70d_19bd_415a_927f_5fb224f58503.slice/crio-dccf64fabcdf136f9357cf04920246211e076a49e096eb7f694f211b2d3be6f4 WatchSource:0}: Error finding container dccf64fabcdf136f9357cf04920246211e076a49e096eb7f694f211b2d3be6f4: Status 404 returned error can't find the container with id dccf64fabcdf136f9357cf04920246211e076a49e096eb7f694f211b2d3be6f4 Mar 08 22:22:53.745461 master-0 kubenswrapper[29458]: I0308 22:22:53.745408 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-547f5c9c5b-6bhl4"] Mar 08 22:22:53.862777 master-0 kubenswrapper[29458]: I0308 22:22:53.862596 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-547f5c9c5b-mz4dt"] Mar 08 22:22:53.976965 master-0 kubenswrapper[29458]: I0308 22:22:53.976926 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-h6ntr"] Mar 08 22:22:53.992086 master-0 kubenswrapper[29458]: I0308 22:22:53.988477 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-4bvf2"] Mar 08 22:22:53.992561 master-0 kubenswrapper[29458]: W0308 22:22:53.992482 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27d457fa_1c81_48a3_bc36_9e16146395b4.slice/crio-feaff62c809a910149591ac99aad4bce559b8ba34810ce22757d2346e05f1db5 WatchSource:0}: Error finding container feaff62c809a910149591ac99aad4bce559b8ba34810ce22757d2346e05f1db5: Status 404 returned error can't find the container with id feaff62c809a910149591ac99aad4bce559b8ba34810ce22757d2346e05f1db5 Mar 08 22:22:54.582139 master-0 kubenswrapper[29458]: I0308 22:22:54.582018 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-4bvf2" event={"ID":"27d457fa-1c81-48a3-bc36-9e16146395b4","Type":"ContainerStarted","Data":"feaff62c809a910149591ac99aad4bce559b8ba34810ce22757d2346e05f1db5"} Mar 08 22:22:54.584196 master-0 kubenswrapper[29458]: I0308 22:22:54.584114 29458 kubelet.go:2453] "SyncLoop 
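Note: the `manager.go:1169] Failed to process watch event ... Status 404` warnings above are cAdvisor noticing a freshly created crio cgroup before the runtime reports the corresponding container; during pod startup this is usually a transient race rather than a failure, and the same container IDs appear moments later in ContainerStarted events. A small stand-alone sketch (the regex assumes only the log format shown here, nothing more) for pulling those IDs out of a journal dump to cross-check them:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    )

    // Matches the 64-hex-character container ID in cAdvisor's
    // "Failed to process watch event" warnings.
    var watch404 = regexp.MustCompile(`Error finding container ([0-9a-f]{64})`)

    func main() {
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
    	for sc.Scan() {
    		if m := watch404.FindStringSubmatch(sc.Text()); m != nil {
    			fmt.Println(m[1])
    		}
    	}
    }

Piping the journal through this and comparing the output against the "Data" fields of later "ContainerStarted" events confirms that each 404'd ID does start.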
(PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-h6ntr" event={"ID":"2caa34ff-b06b-412d-8b01-8113f9d814e0","Type":"ContainerStarted","Data":"b5e0fbf4bc0ec8b0d2f2bd0237318bf5fb4f6f423448f6e054aa4c92d5d0bd35"} Mar 08 22:22:54.587557 master-0 kubenswrapper[29458]: I0308 22:22:54.586465 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5c9c5b-mz4dt" event={"ID":"80627f32-4511-45ff-8d5a-868930aa5ec9","Type":"ContainerStarted","Data":"4ed0d21e52aa0cc62db820450e0357d2046eb7114bf4cbb3106539e2e7adc1bc"} Mar 08 22:22:54.587861 master-0 kubenswrapper[29458]: I0308 22:22:54.587788 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5c9c5b-6bhl4" event={"ID":"44d7e70d-19bd-415a-927f-5fb224f58503","Type":"ContainerStarted","Data":"dccf64fabcdf136f9357cf04920246211e076a49e096eb7f694f211b2d3be6f4"} Mar 08 22:23:00.078433 master-0 kubenswrapper[29458]: I0308 22:23:00.078361 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-74d9f6c4f8-tprw8" Mar 08 22:23:04.702112 master-0 kubenswrapper[29458]: I0308 22:23:04.699864 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lpdb9" event={"ID":"169b2fa3-6f45-47e1-ab7f-b20d21509fd1","Type":"ContainerStarted","Data":"808868975acdad5c44ac979a237dba0863063863384a11f66b4a774979a4656b"} Mar 08 22:23:04.703216 master-0 kubenswrapper[29458]: I0308 22:23:04.702993 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-4bvf2" event={"ID":"27d457fa-1c81-48a3-bc36-9e16146395b4","Type":"ContainerStarted","Data":"7b1f76b009ea33e966f123a52ccb404a5564beb8ca73c62745006b991add36cb"} Mar 08 22:23:04.703708 master-0 kubenswrapper[29458]: I0308 22:23:04.703393 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-4bvf2" Mar 08 22:23:04.705828 master-0 kubenswrapper[29458]: I0308 22:23:04.705464 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-4bvf2" Mar 08 22:23:04.705920 master-0 kubenswrapper[29458]: I0308 22:23:04.705823 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-h6ntr" event={"ID":"2caa34ff-b06b-412d-8b01-8113f9d814e0","Type":"ContainerStarted","Data":"2a00f30ad2a75edd0c95cc88824e74f8127c1aa25d35daa4ca10611e2f1847dd"} Mar 08 22:23:04.706001 master-0 kubenswrapper[29458]: I0308 22:23:04.705973 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-h6ntr" Mar 08 22:23:04.708139 master-0 kubenswrapper[29458]: I0308 22:23:04.708067 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5c9c5b-mz4dt" event={"ID":"80627f32-4511-45ff-8d5a-868930aa5ec9","Type":"ContainerStarted","Data":"891ec2cd5212c7b71e3c028a044d60770d2fb41b0bb626f0a615340b3ebf42bf"} Mar 08 22:23:04.710564 master-0 kubenswrapper[29458]: I0308 22:23:04.710514 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5c9c5b-6bhl4" 
event={"ID":"44d7e70d-19bd-415a-927f-5fb224f58503","Type":"ContainerStarted","Data":"74f3211f0daaf8f2825e159a6a18886cbbea795415ec0893fccaedee795a8b52"} Mar 08 22:23:04.738611 master-0 kubenswrapper[29458]: I0308 22:23:04.738492 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lpdb9" podStartSLOduration=2.65051113 podStartE2EDuration="12.738464742s" podCreationTimestamp="2026-03-08 22:22:52 +0000 UTC" firstStartedPulling="2026-03-08 22:22:53.428101622 +0000 UTC m=+542.716159214" lastFinishedPulling="2026-03-08 22:23:03.516055234 +0000 UTC m=+552.804112826" observedRunningTime="2026-03-08 22:23:04.73254398 +0000 UTC m=+554.020601612" watchObservedRunningTime="2026-03-08 22:23:04.738464742 +0000 UTC m=+554.026522364" Mar 08 22:23:04.771479 master-0 kubenswrapper[29458]: I0308 22:23:04.771061 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-4bvf2" podStartSLOduration=3.215452797 podStartE2EDuration="12.771036388s" podCreationTimestamp="2026-03-08 22:22:52 +0000 UTC" firstStartedPulling="2026-03-08 22:22:53.998879468 +0000 UTC m=+543.286937060" lastFinishedPulling="2026-03-08 22:23:03.554463039 +0000 UTC m=+552.842520651" observedRunningTime="2026-03-08 22:23:04.764190622 +0000 UTC m=+554.052248214" watchObservedRunningTime="2026-03-08 22:23:04.771036388 +0000 UTC m=+554.059093980" Mar 08 22:23:04.817934 master-0 kubenswrapper[29458]: I0308 22:23:04.811662 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-h6ntr" podStartSLOduration=2.244948363 podStartE2EDuration="11.81164032s" podCreationTimestamp="2026-03-08 22:22:53 +0000 UTC" firstStartedPulling="2026-03-08 22:22:53.983349349 +0000 UTC m=+543.271406941" lastFinishedPulling="2026-03-08 22:23:03.550041306 +0000 UTC m=+552.838098898" observedRunningTime="2026-03-08 22:23:04.804801785 +0000 UTC m=+554.092859387" watchObservedRunningTime="2026-03-08 22:23:04.81164032 +0000 UTC m=+554.099697922" Mar 08 22:23:04.848520 master-0 kubenswrapper[29458]: I0308 22:23:04.847625 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5c9c5b-mz4dt" podStartSLOduration=3.214521012 podStartE2EDuration="12.847595104s" podCreationTimestamp="2026-03-08 22:22:52 +0000 UTC" firstStartedPulling="2026-03-08 22:22:53.879576234 +0000 UTC m=+543.167633826" lastFinishedPulling="2026-03-08 22:23:03.512650316 +0000 UTC m=+552.800707918" observedRunningTime="2026-03-08 22:23:04.844758891 +0000 UTC m=+554.132816493" watchObservedRunningTime="2026-03-08 22:23:04.847595104 +0000 UTC m=+554.135652716" Mar 08 22:23:04.898063 master-0 kubenswrapper[29458]: I0308 22:23:04.897955 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5c9c5b-6bhl4" podStartSLOduration=3.087674325 podStartE2EDuration="12.897927487s" podCreationTimestamp="2026-03-08 22:22:52 +0000 UTC" firstStartedPulling="2026-03-08 22:22:53.740454102 +0000 UTC m=+543.028511694" lastFinishedPulling="2026-03-08 22:23:03.550707264 +0000 UTC m=+552.838764856" observedRunningTime="2026-03-08 22:23:04.883345562 +0000 UTC m=+554.171403214" watchObservedRunningTime="2026-03-08 22:23:04.897927487 +0000 UTC m=+554.185985079" Mar 08 22:23:13.474146 master-0 kubenswrapper[29458]: I0308 22:23:13.474034 29458 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-h6ntr" Mar 08 22:23:29.503853 master-0 kubenswrapper[29458]: I0308 22:23:29.503757 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-86c8b99677-4n6kh" Mar 08 22:23:37.763171 master-0 kubenswrapper[29458]: I0308 22:23:37.762790 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-l8xtg"] Mar 08 22:23:37.763951 master-0 kubenswrapper[29458]: I0308 22:23:37.763801 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-l8xtg" Mar 08 22:23:37.774095 master-0 kubenswrapper[29458]: I0308 22:23:37.767585 29458 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 08 22:23:37.822193 master-0 kubenswrapper[29458]: I0308 22:23:37.820888 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-l8xtg"] Mar 08 22:23:37.846182 master-0 kubenswrapper[29458]: I0308 22:23:37.845949 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-4sw9q"] Mar 08 22:23:37.850312 master-0 kubenswrapper[29458]: I0308 22:23:37.850248 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-4sw9q" Mar 08 22:23:37.857768 master-0 kubenswrapper[29458]: I0308 22:23:37.857695 29458 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 08 22:23:37.858057 master-0 kubenswrapper[29458]: I0308 22:23:37.858026 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 08 22:23:37.885309 master-0 kubenswrapper[29458]: I0308 22:23:37.885267 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24429483-3579-4fd0-878d-7e7db6af4f65-cert\") pod \"frr-k8s-webhook-server-7f989f654f-l8xtg\" (UID: \"24429483-3579-4fd0-878d-7e7db6af4f65\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-l8xtg" Mar 08 22:23:37.885539 master-0 kubenswrapper[29458]: I0308 22:23:37.885515 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8dmk\" (UniqueName: \"kubernetes.io/projected/24429483-3579-4fd0-878d-7e7db6af4f65-kube-api-access-s8dmk\") pod \"frr-k8s-webhook-server-7f989f654f-l8xtg\" (UID: \"24429483-3579-4fd0-878d-7e7db6af4f65\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-l8xtg" Mar 08 22:23:37.892647 master-0 kubenswrapper[29458]: I0308 22:23:37.892599 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-ctk5s"] Mar 08 22:23:37.894598 master-0 kubenswrapper[29458]: I0308 22:23:37.894577 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-ctk5s" Mar 08 22:23:37.907358 master-0 kubenswrapper[29458]: I0308 22:23:37.906249 29458 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 08 22:23:37.907358 master-0 kubenswrapper[29458]: I0308 22:23:37.906551 29458 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 08 22:23:37.907358 master-0 kubenswrapper[29458]: I0308 22:23:37.906690 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 08 22:23:37.907358 master-0 kubenswrapper[29458]: I0308 22:23:37.906786 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-86ddb6bd46-xqsrg"] Mar 08 22:23:37.911119 master-0 kubenswrapper[29458]: I0308 22:23:37.909417 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-86ddb6bd46-xqsrg" Mar 08 22:23:37.913844 master-0 kubenswrapper[29458]: I0308 22:23:37.911226 29458 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 08 22:23:37.921383 master-0 kubenswrapper[29458]: I0308 22:23:37.920997 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-xqsrg"] Mar 08 22:23:37.990170 master-0 kubenswrapper[29458]: I0308 22:23:37.988310 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d696add8-6964-4b4a-b01e-a64f641cc597-frr-conf\") pod \"frr-k8s-4sw9q\" (UID: \"d696add8-6964-4b4a-b01e-a64f641cc597\") " pod="metallb-system/frr-k8s-4sw9q" Mar 08 22:23:37.990170 master-0 kubenswrapper[29458]: I0308 22:23:37.988493 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12233273-d6c6-4698-8d94-cd602da19788-metrics-certs\") pod \"speaker-ctk5s\" (UID: \"12233273-d6c6-4698-8d94-cd602da19788\") " pod="metallb-system/speaker-ctk5s" Mar 08 22:23:37.990170 master-0 kubenswrapper[29458]: I0308 22:23:37.988548 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24429483-3579-4fd0-878d-7e7db6af4f65-cert\") pod \"frr-k8s-webhook-server-7f989f654f-l8xtg\" (UID: \"24429483-3579-4fd0-878d-7e7db6af4f65\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-l8xtg" Mar 08 22:23:37.990170 master-0 kubenswrapper[29458]: I0308 22:23:37.988578 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0d686fce-80f1-40df-997c-d0273ef978f5-cert\") pod \"controller-86ddb6bd46-xqsrg\" (UID: \"0d686fce-80f1-40df-997c-d0273ef978f5\") " pod="metallb-system/controller-86ddb6bd46-xqsrg" Mar 08 22:23:37.990170 master-0 kubenswrapper[29458]: I0308 22:23:37.988611 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8dmk\" (UniqueName: \"kubernetes.io/projected/24429483-3579-4fd0-878d-7e7db6af4f65-kube-api-access-s8dmk\") pod \"frr-k8s-webhook-server-7f989f654f-l8xtg\" (UID: \"24429483-3579-4fd0-878d-7e7db6af4f65\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-l8xtg" Mar 08 22:23:37.990170 master-0 kubenswrapper[29458]: I0308 22:23:37.988635 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d686fce-80f1-40df-997c-d0273ef978f5-metrics-certs\") pod \"controller-86ddb6bd46-xqsrg\" (UID: \"0d686fce-80f1-40df-997c-d0273ef978f5\") " pod="metallb-system/controller-86ddb6bd46-xqsrg" Mar 08 22:23:37.990170 master-0 kubenswrapper[29458]: I0308 22:23:37.988667 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d696add8-6964-4b4a-b01e-a64f641cc597-metrics\") pod \"frr-k8s-4sw9q\" (UID: \"d696add8-6964-4b4a-b01e-a64f641cc597\") " pod="metallb-system/frr-k8s-4sw9q" Mar 08 22:23:37.990170 master-0 kubenswrapper[29458]: I0308 22:23:37.988693 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d696add8-6964-4b4a-b01e-a64f641cc597-frr-startup\") pod \"frr-k8s-4sw9q\" (UID: \"d696add8-6964-4b4a-b01e-a64f641cc597\") " pod="metallb-system/frr-k8s-4sw9q" Mar 08 22:23:37.990170 master-0 kubenswrapper[29458]: I0308 22:23:37.988740 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gtp8\" (UniqueName: \"kubernetes.io/projected/12233273-d6c6-4698-8d94-cd602da19788-kube-api-access-2gtp8\") pod \"speaker-ctk5s\" (UID: \"12233273-d6c6-4698-8d94-cd602da19788\") " pod="metallb-system/speaker-ctk5s" Mar 08 22:23:37.990170 master-0 kubenswrapper[29458]: I0308 22:23:37.988780 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/12233273-d6c6-4698-8d94-cd602da19788-memberlist\") pod \"speaker-ctk5s\" (UID: \"12233273-d6c6-4698-8d94-cd602da19788\") " pod="metallb-system/speaker-ctk5s" Mar 08 22:23:37.990170 master-0 kubenswrapper[29458]: I0308 22:23:37.988808 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d696add8-6964-4b4a-b01e-a64f641cc597-frr-sockets\") pod \"frr-k8s-4sw9q\" (UID: \"d696add8-6964-4b4a-b01e-a64f641cc597\") " pod="metallb-system/frr-k8s-4sw9q" Mar 08 22:23:37.990170 master-0 kubenswrapper[29458]: I0308 22:23:37.988846 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh7vv\" (UniqueName: \"kubernetes.io/projected/d696add8-6964-4b4a-b01e-a64f641cc597-kube-api-access-vh7vv\") pod \"frr-k8s-4sw9q\" (UID: \"d696add8-6964-4b4a-b01e-a64f641cc597\") " pod="metallb-system/frr-k8s-4sw9q" Mar 08 22:23:37.990170 master-0 kubenswrapper[29458]: I0308 22:23:37.988871 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvb48\" (UniqueName: \"kubernetes.io/projected/0d686fce-80f1-40df-997c-d0273ef978f5-kube-api-access-bvb48\") pod \"controller-86ddb6bd46-xqsrg\" (UID: \"0d686fce-80f1-40df-997c-d0273ef978f5\") " pod="metallb-system/controller-86ddb6bd46-xqsrg" Mar 08 22:23:37.990170 master-0 kubenswrapper[29458]: I0308 22:23:37.988899 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/12233273-d6c6-4698-8d94-cd602da19788-metallb-excludel2\") pod \"speaker-ctk5s\" (UID: \"12233273-d6c6-4698-8d94-cd602da19788\") " pod="metallb-system/speaker-ctk5s" Mar 08 22:23:37.990170 master-0 kubenswrapper[29458]: I0308 22:23:37.988925 
29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d696add8-6964-4b4a-b01e-a64f641cc597-reloader\") pod \"frr-k8s-4sw9q\" (UID: \"d696add8-6964-4b4a-b01e-a64f641cc597\") " pod="metallb-system/frr-k8s-4sw9q" Mar 08 22:23:37.990170 master-0 kubenswrapper[29458]: I0308 22:23:37.988959 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d696add8-6964-4b4a-b01e-a64f641cc597-metrics-certs\") pod \"frr-k8s-4sw9q\" (UID: \"d696add8-6964-4b4a-b01e-a64f641cc597\") " pod="metallb-system/frr-k8s-4sw9q" Mar 08 22:23:37.998183 master-0 kubenswrapper[29458]: I0308 22:23:37.994912 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24429483-3579-4fd0-878d-7e7db6af4f65-cert\") pod \"frr-k8s-webhook-server-7f989f654f-l8xtg\" (UID: \"24429483-3579-4fd0-878d-7e7db6af4f65\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-l8xtg" Mar 08 22:23:38.020449 master-0 kubenswrapper[29458]: I0308 22:23:38.020064 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8dmk\" (UniqueName: \"kubernetes.io/projected/24429483-3579-4fd0-878d-7e7db6af4f65-kube-api-access-s8dmk\") pod \"frr-k8s-webhook-server-7f989f654f-l8xtg\" (UID: \"24429483-3579-4fd0-878d-7e7db6af4f65\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-l8xtg" Mar 08 22:23:38.090921 master-0 kubenswrapper[29458]: I0308 22:23:38.090820 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/12233273-d6c6-4698-8d94-cd602da19788-memberlist\") pod \"speaker-ctk5s\" (UID: \"12233273-d6c6-4698-8d94-cd602da19788\") " pod="metallb-system/speaker-ctk5s" Mar 08 22:23:38.091248 master-0 kubenswrapper[29458]: I0308 22:23:38.090986 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d696add8-6964-4b4a-b01e-a64f641cc597-frr-sockets\") pod \"frr-k8s-4sw9q\" (UID: \"d696add8-6964-4b4a-b01e-a64f641cc597\") " pod="metallb-system/frr-k8s-4sw9q" Mar 08 22:23:38.091248 master-0 kubenswrapper[29458]: I0308 22:23:38.091048 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvb48\" (UniqueName: \"kubernetes.io/projected/0d686fce-80f1-40df-997c-d0273ef978f5-kube-api-access-bvb48\") pod \"controller-86ddb6bd46-xqsrg\" (UID: \"0d686fce-80f1-40df-997c-d0273ef978f5\") " pod="metallb-system/controller-86ddb6bd46-xqsrg" Mar 08 22:23:38.091248 master-0 kubenswrapper[29458]: I0308 22:23:38.091124 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh7vv\" (UniqueName: \"kubernetes.io/projected/d696add8-6964-4b4a-b01e-a64f641cc597-kube-api-access-vh7vv\") pod \"frr-k8s-4sw9q\" (UID: \"d696add8-6964-4b4a-b01e-a64f641cc597\") " pod="metallb-system/frr-k8s-4sw9q" Mar 08 22:23:38.091248 master-0 kubenswrapper[29458]: I0308 22:23:38.091162 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/12233273-d6c6-4698-8d94-cd602da19788-metallb-excludel2\") pod \"speaker-ctk5s\" (UID: \"12233273-d6c6-4698-8d94-cd602da19788\") " pod="metallb-system/speaker-ctk5s" Mar 08 22:23:38.091733 master-0 kubenswrapper[29458]: E0308 22:23:38.091684 29458 
Mar 08 22:23:38.091733 master-0 kubenswrapper[29458]: E0308 22:23:38.091684 29458 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Mar 08 22:23:38.091818 master-0 kubenswrapper[29458]: I0308 22:23:38.091743 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d696add8-6964-4b4a-b01e-a64f641cc597-reloader\") pod \"frr-k8s-4sw9q\" (UID: \"d696add8-6964-4b4a-b01e-a64f641cc597\") " pod="metallb-system/frr-k8s-4sw9q"
Mar 08 22:23:38.091818 master-0 kubenswrapper[29458]: E0308 22:23:38.091784 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12233273-d6c6-4698-8d94-cd602da19788-memberlist podName:12233273-d6c6-4698-8d94-cd602da19788 nodeName:}" failed. No retries permitted until 2026-03-08 22:23:38.591758204 +0000 UTC m=+587.879816016 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/12233273-d6c6-4698-8d94-cd602da19788-memberlist") pod "speaker-ctk5s" (UID: "12233273-d6c6-4698-8d94-cd602da19788") : secret "metallb-memberlist" not found
Mar 08 22:23:38.091905 master-0 kubenswrapper[29458]: I0308 22:23:38.091843 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d696add8-6964-4b4a-b01e-a64f641cc597-metrics-certs\") pod \"frr-k8s-4sw9q\" (UID: \"d696add8-6964-4b4a-b01e-a64f641cc597\") " pod="metallb-system/frr-k8s-4sw9q"
Mar 08 22:23:38.092045 master-0 kubenswrapper[29458]: I0308 22:23:38.092017 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d696add8-6964-4b4a-b01e-a64f641cc597-frr-conf\") pod \"frr-k8s-4sw9q\" (UID: \"d696add8-6964-4b4a-b01e-a64f641cc597\") " pod="metallb-system/frr-k8s-4sw9q"
Mar 08 22:23:38.092177 master-0 kubenswrapper[29458]: I0308 22:23:38.091706 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d696add8-6964-4b4a-b01e-a64f641cc597-frr-sockets\") pod \"frr-k8s-4sw9q\" (UID: \"d696add8-6964-4b4a-b01e-a64f641cc597\") " pod="metallb-system/frr-k8s-4sw9q"
Mar 08 22:23:38.092239 master-0 kubenswrapper[29458]: I0308 22:23:38.092157 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12233273-d6c6-4698-8d94-cd602da19788-metrics-certs\") pod \"speaker-ctk5s\" (UID: \"12233273-d6c6-4698-8d94-cd602da19788\") " pod="metallb-system/speaker-ctk5s"
Mar 08 22:23:38.092296 master-0 kubenswrapper[29458]: I0308 22:23:38.092263 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0d686fce-80f1-40df-997c-d0273ef978f5-cert\") pod \"controller-86ddb6bd46-xqsrg\" (UID: \"0d686fce-80f1-40df-997c-d0273ef978f5\") " pod="metallb-system/controller-86ddb6bd46-xqsrg"
Mar 08 22:23:38.092338 master-0 kubenswrapper[29458]: I0308 22:23:38.092325 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d686fce-80f1-40df-997c-d0273ef978f5-metrics-certs\") pod \"controller-86ddb6bd46-xqsrg\" (UID: \"0d686fce-80f1-40df-997c-d0273ef978f5\") " pod="metallb-system/controller-86ddb6bd46-xqsrg"
Mar 08 22:23:38.092387 master-0 kubenswrapper[29458]: I0308 22:23:38.092375 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d696add8-6964-4b4a-b01e-a64f641cc597-metrics\") pod \"frr-k8s-4sw9q\" (UID: \"d696add8-6964-4b4a-b01e-a64f641cc597\") " pod="metallb-system/frr-k8s-4sw9q"
Mar 08 22:23:38.092425 master-0 kubenswrapper[29458]: I0308 22:23:38.092409 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d696add8-6964-4b4a-b01e-a64f641cc597-frr-startup\") pod \"frr-k8s-4sw9q\" (UID: \"d696add8-6964-4b4a-b01e-a64f641cc597\") " pod="metallb-system/frr-k8s-4sw9q"
Mar 08 22:23:38.092502 master-0 kubenswrapper[29458]: I0308 22:23:38.092475 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d696add8-6964-4b4a-b01e-a64f641cc597-reloader\") pod \"frr-k8s-4sw9q\" (UID: \"d696add8-6964-4b4a-b01e-a64f641cc597\") " pod="metallb-system/frr-k8s-4sw9q"
Mar 08 22:23:38.092601 master-0 kubenswrapper[29458]: I0308 22:23:38.092572 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gtp8\" (UniqueName: \"kubernetes.io/projected/12233273-d6c6-4698-8d94-cd602da19788-kube-api-access-2gtp8\") pod \"speaker-ctk5s\" (UID: \"12233273-d6c6-4698-8d94-cd602da19788\") " pod="metallb-system/speaker-ctk5s"
Mar 08 22:23:38.092828 master-0 kubenswrapper[29458]: E0308 22:23:38.092269 29458 secret.go:189] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found
Mar 08 22:23:38.092828 master-0 kubenswrapper[29458]: E0308 22:23:38.092551 29458 secret.go:189] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found
Mar 08 22:23:38.092917 master-0 kubenswrapper[29458]: I0308 22:23:38.092849 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/12233273-d6c6-4698-8d94-cd602da19788-metallb-excludel2\") pod \"speaker-ctk5s\" (UID: \"12233273-d6c6-4698-8d94-cd602da19788\") " pod="metallb-system/speaker-ctk5s"
Mar 08 22:23:38.093232 master-0 kubenswrapper[29458]: E0308 22:23:38.093199 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d696add8-6964-4b4a-b01e-a64f641cc597-metrics-certs podName:d696add8-6964-4b4a-b01e-a64f641cc597 nodeName:}" failed. No retries permitted until 2026-03-08 22:23:38.59315575 +0000 UTC m=+587.881213542 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d696add8-6964-4b4a-b01e-a64f641cc597-metrics-certs") pod "frr-k8s-4sw9q" (UID: "d696add8-6964-4b4a-b01e-a64f641cc597") : secret "frr-k8s-certs-secret" not found
Mar 08 22:23:38.093317 master-0 kubenswrapper[29458]: E0308 22:23:38.093271 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12233273-d6c6-4698-8d94-cd602da19788-metrics-certs podName:12233273-d6c6-4698-8d94-cd602da19788 nodeName:}" failed. No retries permitted until 2026-03-08 22:23:38.593258643 +0000 UTC m=+587.881316485 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12233273-d6c6-4698-8d94-cd602da19788-metrics-certs") pod "speaker-ctk5s" (UID: "12233273-d6c6-4698-8d94-cd602da19788") : secret "speaker-certs-secret" not found
Mar 08 22:23:38.093817 master-0 kubenswrapper[29458]: I0308 22:23:38.093763 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d696add8-6964-4b4a-b01e-a64f641cc597-metrics\") pod \"frr-k8s-4sw9q\" (UID: \"d696add8-6964-4b4a-b01e-a64f641cc597\") " pod="metallb-system/frr-k8s-4sw9q"
Mar 08 22:23:38.093921 master-0 kubenswrapper[29458]: I0308 22:23:38.093868 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d696add8-6964-4b4a-b01e-a64f641cc597-frr-startup\") pod \"frr-k8s-4sw9q\" (UID: \"d696add8-6964-4b4a-b01e-a64f641cc597\") " pod="metallb-system/frr-k8s-4sw9q"
Mar 08 22:23:38.094025 master-0 kubenswrapper[29458]: I0308 22:23:38.093970 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d696add8-6964-4b4a-b01e-a64f641cc597-frr-conf\") pod \"frr-k8s-4sw9q\" (UID: \"d696add8-6964-4b4a-b01e-a64f641cc597\") " pod="metallb-system/frr-k8s-4sw9q"
Mar 08 22:23:38.097710 master-0 kubenswrapper[29458]: I0308 22:23:38.097450 29458 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Mar 08 22:23:38.097710 master-0 kubenswrapper[29458]: I0308 22:23:38.097644 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d686fce-80f1-40df-997c-d0273ef978f5-metrics-certs\") pod \"controller-86ddb6bd46-xqsrg\" (UID: \"0d686fce-80f1-40df-997c-d0273ef978f5\") " pod="metallb-system/controller-86ddb6bd46-xqsrg"
Mar 08 22:23:38.106668 master-0 kubenswrapper[29458]: I0308 22:23:38.106618 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0d686fce-80f1-40df-997c-d0273ef978f5-cert\") pod \"controller-86ddb6bd46-xqsrg\" (UID: \"0d686fce-80f1-40df-997c-d0273ef978f5\") " pod="metallb-system/controller-86ddb6bd46-xqsrg"
Mar 08 22:23:38.108781 master-0 kubenswrapper[29458]: I0308 22:23:38.108734 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-l8xtg"
Mar 08 22:23:38.113492 master-0 kubenswrapper[29458]: I0308 22:23:38.113461 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gtp8\" (UniqueName: \"kubernetes.io/projected/12233273-d6c6-4698-8d94-cd602da19788-kube-api-access-2gtp8\") pod \"speaker-ctk5s\" (UID: \"12233273-d6c6-4698-8d94-cd602da19788\") " pod="metallb-system/speaker-ctk5s"
Mar 08 22:23:38.116809 master-0 kubenswrapper[29458]: I0308 22:23:38.116757 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvb48\" (UniqueName: \"kubernetes.io/projected/0d686fce-80f1-40df-997c-d0273ef978f5-kube-api-access-bvb48\") pod \"controller-86ddb6bd46-xqsrg\" (UID: \"0d686fce-80f1-40df-997c-d0273ef978f5\") " pod="metallb-system/controller-86ddb6bd46-xqsrg"
Mar 08 22:23:38.116940 master-0 kubenswrapper[29458]: I0308 22:23:38.116903 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh7vv\" (UniqueName: \"kubernetes.io/projected/d696add8-6964-4b4a-b01e-a64f641cc597-kube-api-access-vh7vv\") pod \"frr-k8s-4sw9q\" (UID: \"d696add8-6964-4b4a-b01e-a64f641cc597\") " pod="metallb-system/frr-k8s-4sw9q"
Mar 08 22:23:38.236096 master-0 kubenswrapper[29458]: I0308 22:23:38.235219 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-86ddb6bd46-xqsrg"
Mar 08 22:23:38.574524 master-0 kubenswrapper[29458]: I0308 22:23:38.574432 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-l8xtg"]
Mar 08 22:23:38.580515 master-0 kubenswrapper[29458]: W0308 22:23:38.580433 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24429483_3579_4fd0_878d_7e7db6af4f65.slice/crio-cfb8e22a963963fcbbd08ebca8037b291d02393cb99dad5ff75b29b6d8613503 WatchSource:0}: Error finding container cfb8e22a963963fcbbd08ebca8037b291d02393cb99dad5ff75b29b6d8613503: Status 404 returned error can't find the container with id cfb8e22a963963fcbbd08ebca8037b291d02393cb99dad5ff75b29b6d8613503
Mar 08 22:23:38.603666 master-0 kubenswrapper[29458]: I0308 22:23:38.603545 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12233273-d6c6-4698-8d94-cd602da19788-metrics-certs\") pod \"speaker-ctk5s\" (UID: \"12233273-d6c6-4698-8d94-cd602da19788\") " pod="metallb-system/speaker-ctk5s"
Mar 08 22:23:38.603929 master-0 kubenswrapper[29458]: I0308 22:23:38.603835 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/12233273-d6c6-4698-8d94-cd602da19788-memberlist\") pod \"speaker-ctk5s\" (UID: \"12233273-d6c6-4698-8d94-cd602da19788\") " pod="metallb-system/speaker-ctk5s"
Mar 08 22:23:38.604110 master-0 kubenswrapper[29458]: E0308 22:23:38.604029 29458 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Mar 08 22:23:38.604235 master-0 kubenswrapper[29458]: E0308 22:23:38.604139 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12233273-d6c6-4698-8d94-cd602da19788-memberlist podName:12233273-d6c6-4698-8d94-cd602da19788 nodeName:}" failed. No retries permitted until 2026-03-08 22:23:39.60411454 +0000 UTC m=+588.892172172 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/12233273-d6c6-4698-8d94-cd602da19788-memberlist") pod "speaker-ctk5s" (UID: "12233273-d6c6-4698-8d94-cd602da19788") : secret "metallb-memberlist" not found
Mar 08 22:23:38.605099 master-0 kubenswrapper[29458]: I0308 22:23:38.604905 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d696add8-6964-4b4a-b01e-a64f641cc597-metrics-certs\") pod \"frr-k8s-4sw9q\" (UID: \"d696add8-6964-4b4a-b01e-a64f641cc597\") " pod="metallb-system/frr-k8s-4sw9q"
Mar 08 22:23:38.609254 master-0 kubenswrapper[29458]: I0308 22:23:38.609185 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12233273-d6c6-4698-8d94-cd602da19788-metrics-certs\") pod \"speaker-ctk5s\" (UID: \"12233273-d6c6-4698-8d94-cd602da19788\") " pod="metallb-system/speaker-ctk5s"
Mar 08 22:23:38.610129 master-0 kubenswrapper[29458]: I0308 22:23:38.609989 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d696add8-6964-4b4a-b01e-a64f641cc597-metrics-certs\") pod \"frr-k8s-4sw9q\" (UID: \"d696add8-6964-4b4a-b01e-a64f641cc597\") " pod="metallb-system/frr-k8s-4sw9q"
Mar 08 22:23:38.673811 master-0 kubenswrapper[29458]: I0308 22:23:38.673739 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-xqsrg"]
Mar 08 22:23:38.681650 master-0 kubenswrapper[29458]: W0308 22:23:38.681046 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d686fce_80f1_40df_997c_d0273ef978f5.slice/crio-baa716080b55336d24259d347bbf1c7f54c291265e7ec6402241b8167958a73f WatchSource:0}: Error finding container baa716080b55336d24259d347bbf1c7f54c291265e7ec6402241b8167958a73f: Status 404 returned error can't find the container with id baa716080b55336d24259d347bbf1c7f54c291265e7ec6402241b8167958a73f
Need to start a new one" pod="metallb-system/frr-k8s-4sw9q" Mar 08 22:23:39.059374 master-0 kubenswrapper[29458]: I0308 22:23:39.058020 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-l8xtg" event={"ID":"24429483-3579-4fd0-878d-7e7db6af4f65","Type":"ContainerStarted","Data":"cfb8e22a963963fcbbd08ebca8037b291d02393cb99dad5ff75b29b6d8613503"} Mar 08 22:23:39.060136 master-0 kubenswrapper[29458]: I0308 22:23:39.059982 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4sw9q" event={"ID":"d696add8-6964-4b4a-b01e-a64f641cc597","Type":"ContainerStarted","Data":"e19d6b132724a8e76d2b90aa617151dce9eba1f032a6837d4ee666da78333b13"} Mar 08 22:23:39.073232 master-0 kubenswrapper[29458]: I0308 22:23:39.065888 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-xqsrg" event={"ID":"0d686fce-80f1-40df-997c-d0273ef978f5","Type":"ContainerStarted","Data":"52da4b54922ff061fdb1bdad248339011c03d0c239b5aea7f27a44f2bacb4df3"} Mar 08 22:23:39.073232 master-0 kubenswrapper[29458]: I0308 22:23:39.065933 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-xqsrg" event={"ID":"0d686fce-80f1-40df-997c-d0273ef978f5","Type":"ContainerStarted","Data":"baa716080b55336d24259d347bbf1c7f54c291265e7ec6402241b8167958a73f"} Mar 08 22:23:39.630718 master-0 kubenswrapper[29458]: I0308 22:23:39.630626 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/12233273-d6c6-4698-8d94-cd602da19788-memberlist\") pod \"speaker-ctk5s\" (UID: \"12233273-d6c6-4698-8d94-cd602da19788\") " pod="metallb-system/speaker-ctk5s" Mar 08 22:23:39.634256 master-0 kubenswrapper[29458]: I0308 22:23:39.634186 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/12233273-d6c6-4698-8d94-cd602da19788-memberlist\") pod \"speaker-ctk5s\" (UID: \"12233273-d6c6-4698-8d94-cd602da19788\") " pod="metallb-system/speaker-ctk5s" Mar 08 22:23:39.726846 master-0 kubenswrapper[29458]: I0308 22:23:39.726774 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-ctk5s" Mar 08 22:23:39.757346 master-0 kubenswrapper[29458]: W0308 22:23:39.757280 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12233273_d6c6_4698_8d94_cd602da19788.slice/crio-dabdeb5a7ee8d2c48bccdc27be85439c6d015467185e70bba946422b5dbb9c75 WatchSource:0}: Error finding container dabdeb5a7ee8d2c48bccdc27be85439c6d015467185e70bba946422b5dbb9c75: Status 404 returned error can't find the container with id dabdeb5a7ee8d2c48bccdc27be85439c6d015467185e70bba946422b5dbb9c75 Mar 08 22:23:39.867269 master-0 kubenswrapper[29458]: I0308 22:23:39.860337 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-s6bc4"] Mar 08 22:23:39.867269 master-0 kubenswrapper[29458]: I0308 22:23:39.862227 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-s6bc4" Mar 08 22:23:39.883578 master-0 kubenswrapper[29458]: I0308 22:23:39.883427 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-k9hr7"] Mar 08 22:23:39.886696 master-0 kubenswrapper[29458]: I0308 22:23:39.885064 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-k9hr7" Mar 08 22:23:39.901199 master-0 kubenswrapper[29458]: I0308 22:23:39.901112 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-s6bc4"] Mar 08 22:23:39.915506 master-0 kubenswrapper[29458]: I0308 22:23:39.915449 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 08 22:23:39.926165 master-0 kubenswrapper[29458]: I0308 22:23:39.926103 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-n7f29"] Mar 08 22:23:39.929083 master-0 kubenswrapper[29458]: I0308 22:23:39.927660 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-n7f29" Mar 08 22:23:39.941169 master-0 kubenswrapper[29458]: I0308 22:23:39.941064 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-k9hr7"] Mar 08 22:23:40.043091 master-0 kubenswrapper[29458]: I0308 22:23:40.043042 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfzjm\" (UniqueName: \"kubernetes.io/projected/0732e393-4dfd-45bf-985c-b0a94d2af91c-kube-api-access-bfzjm\") pod \"nmstate-handler-n7f29\" (UID: \"0732e393-4dfd-45bf-985c-b0a94d2af91c\") " pod="openshift-nmstate/nmstate-handler-n7f29" Mar 08 22:23:40.043318 master-0 kubenswrapper[29458]: I0308 22:23:40.043302 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0732e393-4dfd-45bf-985c-b0a94d2af91c-dbus-socket\") pod \"nmstate-handler-n7f29\" (UID: \"0732e393-4dfd-45bf-985c-b0a94d2af91c\") " pod="openshift-nmstate/nmstate-handler-n7f29" Mar 08 22:23:40.043401 master-0 kubenswrapper[29458]: I0308 22:23:40.043389 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0732e393-4dfd-45bf-985c-b0a94d2af91c-nmstate-lock\") pod \"nmstate-handler-n7f29\" (UID: \"0732e393-4dfd-45bf-985c-b0a94d2af91c\") " pod="openshift-nmstate/nmstate-handler-n7f29" Mar 08 22:23:40.043539 master-0 kubenswrapper[29458]: I0308 22:23:40.043525 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwcnc\" (UniqueName: \"kubernetes.io/projected/89c5da4c-0428-45e1-b502-41245927f5af-kube-api-access-dwcnc\") pod \"nmstate-metrics-69594cc75-s6bc4\" (UID: \"89c5da4c-0428-45e1-b502-41245927f5af\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-s6bc4" Mar 08 22:23:40.043644 master-0 kubenswrapper[29458]: I0308 22:23:40.043628 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5c7fca44-ee48-4a08-82ab-126dc4a4e7e5-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-k9hr7\" (UID: \"5c7fca44-ee48-4a08-82ab-126dc4a4e7e5\") " 
pod="openshift-nmstate/nmstate-webhook-786f45cff4-k9hr7" Mar 08 22:23:40.043738 master-0 kubenswrapper[29458]: I0308 22:23:40.043723 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtcsm\" (UniqueName: \"kubernetes.io/projected/5c7fca44-ee48-4a08-82ab-126dc4a4e7e5-kube-api-access-gtcsm\") pod \"nmstate-webhook-786f45cff4-k9hr7\" (UID: \"5c7fca44-ee48-4a08-82ab-126dc4a4e7e5\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-k9hr7" Mar 08 22:23:40.043851 master-0 kubenswrapper[29458]: I0308 22:23:40.043839 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0732e393-4dfd-45bf-985c-b0a94d2af91c-ovs-socket\") pod \"nmstate-handler-n7f29\" (UID: \"0732e393-4dfd-45bf-985c-b0a94d2af91c\") " pod="openshift-nmstate/nmstate-handler-n7f29" Mar 08 22:23:40.083394 master-0 kubenswrapper[29458]: I0308 22:23:40.083184 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ctk5s" event={"ID":"12233273-d6c6-4698-8d94-cd602da19788","Type":"ContainerStarted","Data":"dabdeb5a7ee8d2c48bccdc27be85439c6d015467185e70bba946422b5dbb9c75"} Mar 08 22:23:40.124869 master-0 kubenswrapper[29458]: I0308 22:23:40.124809 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5jbhv"] Mar 08 22:23:40.126370 master-0 kubenswrapper[29458]: I0308 22:23:40.126351 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5jbhv" Mar 08 22:23:40.130304 master-0 kubenswrapper[29458]: I0308 22:23:40.130276 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 08 22:23:40.130646 master-0 kubenswrapper[29458]: I0308 22:23:40.130633 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 08 22:23:40.146266 master-0 kubenswrapper[29458]: I0308 22:23:40.146161 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwcnc\" (UniqueName: \"kubernetes.io/projected/89c5da4c-0428-45e1-b502-41245927f5af-kube-api-access-dwcnc\") pod \"nmstate-metrics-69594cc75-s6bc4\" (UID: \"89c5da4c-0428-45e1-b502-41245927f5af\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-s6bc4" Mar 08 22:23:40.146433 master-0 kubenswrapper[29458]: I0308 22:23:40.146418 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5c7fca44-ee48-4a08-82ab-126dc4a4e7e5-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-k9hr7\" (UID: \"5c7fca44-ee48-4a08-82ab-126dc4a4e7e5\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-k9hr7" Mar 08 22:23:40.146572 master-0 kubenswrapper[29458]: I0308 22:23:40.146556 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtcsm\" (UniqueName: \"kubernetes.io/projected/5c7fca44-ee48-4a08-82ab-126dc4a4e7e5-kube-api-access-gtcsm\") pod \"nmstate-webhook-786f45cff4-k9hr7\" (UID: \"5c7fca44-ee48-4a08-82ab-126dc4a4e7e5\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-k9hr7" Mar 08 22:23:40.146739 master-0 kubenswrapper[29458]: I0308 22:23:40.146725 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0732e393-4dfd-45bf-985c-b0a94d2af91c-ovs-socket\") 
pod \"nmstate-handler-n7f29\" (UID: \"0732e393-4dfd-45bf-985c-b0a94d2af91c\") " pod="openshift-nmstate/nmstate-handler-n7f29" Mar 08 22:23:40.146848 master-0 kubenswrapper[29458]: I0308 22:23:40.146836 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfzjm\" (UniqueName: \"kubernetes.io/projected/0732e393-4dfd-45bf-985c-b0a94d2af91c-kube-api-access-bfzjm\") pod \"nmstate-handler-n7f29\" (UID: \"0732e393-4dfd-45bf-985c-b0a94d2af91c\") " pod="openshift-nmstate/nmstate-handler-n7f29" Mar 08 22:23:40.146958 master-0 kubenswrapper[29458]: I0308 22:23:40.146943 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0732e393-4dfd-45bf-985c-b0a94d2af91c-dbus-socket\") pod \"nmstate-handler-n7f29\" (UID: \"0732e393-4dfd-45bf-985c-b0a94d2af91c\") " pod="openshift-nmstate/nmstate-handler-n7f29" Mar 08 22:23:40.147056 master-0 kubenswrapper[29458]: I0308 22:23:40.147044 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0732e393-4dfd-45bf-985c-b0a94d2af91c-nmstate-lock\") pod \"nmstate-handler-n7f29\" (UID: \"0732e393-4dfd-45bf-985c-b0a94d2af91c\") " pod="openshift-nmstate/nmstate-handler-n7f29" Mar 08 22:23:40.149253 master-0 kubenswrapper[29458]: E0308 22:23:40.149132 29458 secret.go:189] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Mar 08 22:23:40.149253 master-0 kubenswrapper[29458]: I0308 22:23:40.149168 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0732e393-4dfd-45bf-985c-b0a94d2af91c-ovs-socket\") pod \"nmstate-handler-n7f29\" (UID: \"0732e393-4dfd-45bf-985c-b0a94d2af91c\") " pod="openshift-nmstate/nmstate-handler-n7f29" Mar 08 22:23:40.149253 master-0 kubenswrapper[29458]: E0308 22:23:40.149203 29458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5c7fca44-ee48-4a08-82ab-126dc4a4e7e5-tls-key-pair podName:5c7fca44-ee48-4a08-82ab-126dc4a4e7e5 nodeName:}" failed. No retries permitted until 2026-03-08 22:23:40.649185414 +0000 UTC m=+589.937243006 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/5c7fca44-ee48-4a08-82ab-126dc4a4e7e5-tls-key-pair") pod "nmstate-webhook-786f45cff4-k9hr7" (UID: "5c7fca44-ee48-4a08-82ab-126dc4a4e7e5") : secret "openshift-nmstate-webhook" not found
Mar 08 22:23:40.149253 master-0 kubenswrapper[29458]: I0308 22:23:40.149135 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0732e393-4dfd-45bf-985c-b0a94d2af91c-dbus-socket\") pod \"nmstate-handler-n7f29\" (UID: \"0732e393-4dfd-45bf-985c-b0a94d2af91c\") " pod="openshift-nmstate/nmstate-handler-n7f29"
Mar 08 22:23:40.149440 master-0 kubenswrapper[29458]: I0308 22:23:40.149329 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0732e393-4dfd-45bf-985c-b0a94d2af91c-nmstate-lock\") pod \"nmstate-handler-n7f29\" (UID: \"0732e393-4dfd-45bf-985c-b0a94d2af91c\") " pod="openshift-nmstate/nmstate-handler-n7f29"
Mar 08 22:23:40.160728 master-0 kubenswrapper[29458]: I0308 22:23:40.160672 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5jbhv"]
Mar 08 22:23:40.166446 master-0 kubenswrapper[29458]: I0308 22:23:40.166404 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfzjm\" (UniqueName: \"kubernetes.io/projected/0732e393-4dfd-45bf-985c-b0a94d2af91c-kube-api-access-bfzjm\") pod \"nmstate-handler-n7f29\" (UID: \"0732e393-4dfd-45bf-985c-b0a94d2af91c\") " pod="openshift-nmstate/nmstate-handler-n7f29"
Mar 08 22:23:40.170695 master-0 kubenswrapper[29458]: I0308 22:23:40.170657 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtcsm\" (UniqueName: \"kubernetes.io/projected/5c7fca44-ee48-4a08-82ab-126dc4a4e7e5-kube-api-access-gtcsm\") pod \"nmstate-webhook-786f45cff4-k9hr7\" (UID: \"5c7fca44-ee48-4a08-82ab-126dc4a4e7e5\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-k9hr7"
Mar 08 22:23:40.171492 master-0 kubenswrapper[29458]: I0308 22:23:40.171471 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwcnc\" (UniqueName: \"kubernetes.io/projected/89c5da4c-0428-45e1-b502-41245927f5af-kube-api-access-dwcnc\") pod \"nmstate-metrics-69594cc75-s6bc4\" (UID: \"89c5da4c-0428-45e1-b502-41245927f5af\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-s6bc4"
Mar 08 22:23:40.255288 master-0 kubenswrapper[29458]: I0308 22:23:40.255034 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-s6bc4"
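The failed tls-key-pair mount above shows the kubelet's retry discipline: when MountVolume.SetUp fails (here because the openshift-nmstate-webhook Secret does not exist yet), nestedpendingoperations embargoes the operation for an exponentially growing interval, starting at the logged durationBeforeRetry of 500ms; once the Secret appears, the retry at 22:23:40.689 below succeeds. A minimal Go sketch of that backoff pattern, assuming an initial 500ms delay that doubles up to a cap (the function names and the 2-minute cap are illustrative, not kubelet's actual nestedpendingoperations API):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff embargoes a failed operation for `delay` (the logged
// "durationBeforeRetry"), doubling the delay on each consecutive failure
// up to maxDelay. Illustrative only.
func retryWithBackoff(op func() error, initial, maxDelay time.Duration) error {
	delay := initial
	for {
		err := op()
		if err == nil {
			return nil
		}
		fmt.Printf("failed: %v; no retries permitted until %s (durationBeforeRetry %s)\n",
			err, time.Now().Add(delay).Format(time.RFC3339), delay)
		time.Sleep(delay)
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
}

func main() {
	attempts := 0
	// Simulate MountVolume.SetUp failing until the referenced Secret exists.
	err := retryWithBackoff(func() error {
		attempts++
		if attempts < 3 {
			return errors.New(`secret "openshift-nmstate-webhook" not found`)
		}
		return nil
	}, 500*time.Millisecond, 2*time.Minute)
	fmt.Printf("mounted after %d attempts (err=%v)\n", attempts, err)
}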
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-s6bc4" Mar 08 22:23:40.256450 master-0 kubenswrapper[29458]: I0308 22:23:40.256412 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/70cb4e01-0bf7-40f4-a0a5-21ca3478f4bc-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-5jbhv\" (UID: \"70cb4e01-0bf7-40f4-a0a5-21ca3478f4bc\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5jbhv" Mar 08 22:23:40.270482 master-0 kubenswrapper[29458]: I0308 22:23:40.270394 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n96hv\" (UniqueName: \"kubernetes.io/projected/70cb4e01-0bf7-40f4-a0a5-21ca3478f4bc-kube-api-access-n96hv\") pod \"nmstate-console-plugin-5dcbbd79cf-5jbhv\" (UID: \"70cb4e01-0bf7-40f4-a0a5-21ca3478f4bc\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5jbhv" Mar 08 22:23:40.270882 master-0 kubenswrapper[29458]: I0308 22:23:40.270867 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/70cb4e01-0bf7-40f4-a0a5-21ca3478f4bc-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-5jbhv\" (UID: \"70cb4e01-0bf7-40f4-a0a5-21ca3478f4bc\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5jbhv" Mar 08 22:23:40.348844 master-0 kubenswrapper[29458]: I0308 22:23:40.340171 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-65dd85765-t86rt"] Mar 08 22:23:40.348844 master-0 kubenswrapper[29458]: I0308 22:23:40.341308 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-65dd85765-t86rt" Mar 08 22:23:40.370106 master-0 kubenswrapper[29458]: I0308 22:23:40.370041 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-n7f29" Mar 08 22:23:40.380409 master-0 kubenswrapper[29458]: I0308 22:23:40.379004 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/70cb4e01-0bf7-40f4-a0a5-21ca3478f4bc-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-5jbhv\" (UID: \"70cb4e01-0bf7-40f4-a0a5-21ca3478f4bc\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5jbhv" Mar 08 22:23:40.380409 master-0 kubenswrapper[29458]: I0308 22:23:40.379190 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/70cb4e01-0bf7-40f4-a0a5-21ca3478f4bc-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-5jbhv\" (UID: \"70cb4e01-0bf7-40f4-a0a5-21ca3478f4bc\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5jbhv" Mar 08 22:23:40.380409 master-0 kubenswrapper[29458]: I0308 22:23:40.379228 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83c10497-e6e1-42b9-9537-493c1e43f8a3-trusted-ca-bundle\") pod \"console-65dd85765-t86rt\" (UID: \"83c10497-e6e1-42b9-9537-493c1e43f8a3\") " pod="openshift-console/console-65dd85765-t86rt" Mar 08 22:23:40.380409 master-0 kubenswrapper[29458]: I0308 22:23:40.379287 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83c10497-e6e1-42b9-9537-493c1e43f8a3-console-serving-cert\") pod \"console-65dd85765-t86rt\" (UID: \"83c10497-e6e1-42b9-9537-493c1e43f8a3\") " pod="openshift-console/console-65dd85765-t86rt" Mar 08 22:23:40.380409 master-0 kubenswrapper[29458]: I0308 22:23:40.379342 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n96hv\" (UniqueName: \"kubernetes.io/projected/70cb4e01-0bf7-40f4-a0a5-21ca3478f4bc-kube-api-access-n96hv\") pod \"nmstate-console-plugin-5dcbbd79cf-5jbhv\" (UID: \"70cb4e01-0bf7-40f4-a0a5-21ca3478f4bc\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5jbhv" Mar 08 22:23:40.380409 master-0 kubenswrapper[29458]: I0308 22:23:40.379371 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83c10497-e6e1-42b9-9537-493c1e43f8a3-service-ca\") pod \"console-65dd85765-t86rt\" (UID: \"83c10497-e6e1-42b9-9537-493c1e43f8a3\") " pod="openshift-console/console-65dd85765-t86rt" Mar 08 22:23:40.380409 master-0 kubenswrapper[29458]: I0308 22:23:40.379425 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpzf8\" (UniqueName: \"kubernetes.io/projected/83c10497-e6e1-42b9-9537-493c1e43f8a3-kube-api-access-bpzf8\") pod \"console-65dd85765-t86rt\" (UID: \"83c10497-e6e1-42b9-9537-493c1e43f8a3\") " pod="openshift-console/console-65dd85765-t86rt" Mar 08 22:23:40.380409 master-0 kubenswrapper[29458]: I0308 22:23:40.379456 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83c10497-e6e1-42b9-9537-493c1e43f8a3-oauth-serving-cert\") pod \"console-65dd85765-t86rt\" (UID: \"83c10497-e6e1-42b9-9537-493c1e43f8a3\") " pod="openshift-console/console-65dd85765-t86rt" Mar 08 22:23:40.380409 master-0 
kubenswrapper[29458]: I0308 22:23:40.379603 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83c10497-e6e1-42b9-9537-493c1e43f8a3-console-config\") pod \"console-65dd85765-t86rt\" (UID: \"83c10497-e6e1-42b9-9537-493c1e43f8a3\") " pod="openshift-console/console-65dd85765-t86rt" Mar 08 22:23:40.380409 master-0 kubenswrapper[29458]: I0308 22:23:40.379676 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/83c10497-e6e1-42b9-9537-493c1e43f8a3-console-oauth-config\") pod \"console-65dd85765-t86rt\" (UID: \"83c10497-e6e1-42b9-9537-493c1e43f8a3\") " pod="openshift-console/console-65dd85765-t86rt" Mar 08 22:23:40.389795 master-0 kubenswrapper[29458]: I0308 22:23:40.384337 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/70cb4e01-0bf7-40f4-a0a5-21ca3478f4bc-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-5jbhv\" (UID: \"70cb4e01-0bf7-40f4-a0a5-21ca3478f4bc\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5jbhv" Mar 08 22:23:40.389795 master-0 kubenswrapper[29458]: I0308 22:23:40.385472 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/70cb4e01-0bf7-40f4-a0a5-21ca3478f4bc-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-5jbhv\" (UID: \"70cb4e01-0bf7-40f4-a0a5-21ca3478f4bc\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5jbhv" Mar 08 22:23:40.411086 master-0 kubenswrapper[29458]: I0308 22:23:40.410897 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-65dd85765-t86rt"] Mar 08 22:23:40.420084 master-0 kubenswrapper[29458]: I0308 22:23:40.420022 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n96hv\" (UniqueName: \"kubernetes.io/projected/70cb4e01-0bf7-40f4-a0a5-21ca3478f4bc-kube-api-access-n96hv\") pod \"nmstate-console-plugin-5dcbbd79cf-5jbhv\" (UID: \"70cb4e01-0bf7-40f4-a0a5-21ca3478f4bc\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5jbhv" Mar 08 22:23:40.437260 master-0 kubenswrapper[29458]: W0308 22:23:40.437204 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0732e393_4dfd_45bf_985c_b0a94d2af91c.slice/crio-5e20a06fc4a94af5893850e9271b3e2d52b0262af324391720586b9d4e2916ab WatchSource:0}: Error finding container 5e20a06fc4a94af5893850e9271b3e2d52b0262af324391720586b9d4e2916ab: Status 404 returned error can't find the container with id 5e20a06fc4a94af5893850e9271b3e2d52b0262af324391720586b9d4e2916ab Mar 08 22:23:40.481999 master-0 kubenswrapper[29458]: I0308 22:23:40.481930 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83c10497-e6e1-42b9-9537-493c1e43f8a3-trusted-ca-bundle\") pod \"console-65dd85765-t86rt\" (UID: \"83c10497-e6e1-42b9-9537-493c1e43f8a3\") " pod="openshift-console/console-65dd85765-t86rt" Mar 08 22:23:40.481999 master-0 kubenswrapper[29458]: I0308 22:23:40.481999 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83c10497-e6e1-42b9-9537-493c1e43f8a3-console-serving-cert\") pod 
\"console-65dd85765-t86rt\" (UID: \"83c10497-e6e1-42b9-9537-493c1e43f8a3\") " pod="openshift-console/console-65dd85765-t86rt" Mar 08 22:23:40.482373 master-0 kubenswrapper[29458]: I0308 22:23:40.482025 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83c10497-e6e1-42b9-9537-493c1e43f8a3-service-ca\") pod \"console-65dd85765-t86rt\" (UID: \"83c10497-e6e1-42b9-9537-493c1e43f8a3\") " pod="openshift-console/console-65dd85765-t86rt" Mar 08 22:23:40.482373 master-0 kubenswrapper[29458]: I0308 22:23:40.482043 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpzf8\" (UniqueName: \"kubernetes.io/projected/83c10497-e6e1-42b9-9537-493c1e43f8a3-kube-api-access-bpzf8\") pod \"console-65dd85765-t86rt\" (UID: \"83c10497-e6e1-42b9-9537-493c1e43f8a3\") " pod="openshift-console/console-65dd85765-t86rt" Mar 08 22:23:40.482373 master-0 kubenswrapper[29458]: I0308 22:23:40.482063 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83c10497-e6e1-42b9-9537-493c1e43f8a3-oauth-serving-cert\") pod \"console-65dd85765-t86rt\" (UID: \"83c10497-e6e1-42b9-9537-493c1e43f8a3\") " pod="openshift-console/console-65dd85765-t86rt" Mar 08 22:23:40.482373 master-0 kubenswrapper[29458]: I0308 22:23:40.482180 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83c10497-e6e1-42b9-9537-493c1e43f8a3-console-config\") pod \"console-65dd85765-t86rt\" (UID: \"83c10497-e6e1-42b9-9537-493c1e43f8a3\") " pod="openshift-console/console-65dd85765-t86rt" Mar 08 22:23:40.482373 master-0 kubenswrapper[29458]: I0308 22:23:40.482204 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/83c10497-e6e1-42b9-9537-493c1e43f8a3-console-oauth-config\") pod \"console-65dd85765-t86rt\" (UID: \"83c10497-e6e1-42b9-9537-493c1e43f8a3\") " pod="openshift-console/console-65dd85765-t86rt" Mar 08 22:23:40.484170 master-0 kubenswrapper[29458]: I0308 22:23:40.484120 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83c10497-e6e1-42b9-9537-493c1e43f8a3-oauth-serving-cert\") pod \"console-65dd85765-t86rt\" (UID: \"83c10497-e6e1-42b9-9537-493c1e43f8a3\") " pod="openshift-console/console-65dd85765-t86rt" Mar 08 22:23:40.484697 master-0 kubenswrapper[29458]: I0308 22:23:40.484662 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83c10497-e6e1-42b9-9537-493c1e43f8a3-trusted-ca-bundle\") pod \"console-65dd85765-t86rt\" (UID: \"83c10497-e6e1-42b9-9537-493c1e43f8a3\") " pod="openshift-console/console-65dd85765-t86rt" Mar 08 22:23:40.485458 master-0 kubenswrapper[29458]: I0308 22:23:40.485417 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83c10497-e6e1-42b9-9537-493c1e43f8a3-console-config\") pod \"console-65dd85765-t86rt\" (UID: \"83c10497-e6e1-42b9-9537-493c1e43f8a3\") " pod="openshift-console/console-65dd85765-t86rt" Mar 08 22:23:40.491997 master-0 kubenswrapper[29458]: I0308 22:23:40.489563 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/83c10497-e6e1-42b9-9537-493c1e43f8a3-console-oauth-config\") pod \"console-65dd85765-t86rt\" (UID: \"83c10497-e6e1-42b9-9537-493c1e43f8a3\") " pod="openshift-console/console-65dd85765-t86rt" Mar 08 22:23:40.491997 master-0 kubenswrapper[29458]: I0308 22:23:40.489887 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83c10497-e6e1-42b9-9537-493c1e43f8a3-service-ca\") pod \"console-65dd85765-t86rt\" (UID: \"83c10497-e6e1-42b9-9537-493c1e43f8a3\") " pod="openshift-console/console-65dd85765-t86rt" Mar 08 22:23:40.528104 master-0 kubenswrapper[29458]: I0308 22:23:40.528007 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83c10497-e6e1-42b9-9537-493c1e43f8a3-console-serving-cert\") pod \"console-65dd85765-t86rt\" (UID: \"83c10497-e6e1-42b9-9537-493c1e43f8a3\") " pod="openshift-console/console-65dd85765-t86rt" Mar 08 22:23:40.539001 master-0 kubenswrapper[29458]: I0308 22:23:40.538068 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpzf8\" (UniqueName: \"kubernetes.io/projected/83c10497-e6e1-42b9-9537-493c1e43f8a3-kube-api-access-bpzf8\") pod \"console-65dd85765-t86rt\" (UID: \"83c10497-e6e1-42b9-9537-493c1e43f8a3\") " pod="openshift-console/console-65dd85765-t86rt" Mar 08 22:23:40.564250 master-0 kubenswrapper[29458]: I0308 22:23:40.563812 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5jbhv" Mar 08 22:23:40.689110 master-0 kubenswrapper[29458]: I0308 22:23:40.688864 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5c7fca44-ee48-4a08-82ab-126dc4a4e7e5-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-k9hr7\" (UID: \"5c7fca44-ee48-4a08-82ab-126dc4a4e7e5\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-k9hr7" Mar 08 22:23:40.692659 master-0 kubenswrapper[29458]: I0308 22:23:40.692604 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5c7fca44-ee48-4a08-82ab-126dc4a4e7e5-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-k9hr7\" (UID: \"5c7fca44-ee48-4a08-82ab-126dc4a4e7e5\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-k9hr7" Mar 08 22:23:40.693685 master-0 kubenswrapper[29458]: I0308 22:23:40.693650 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-65dd85765-t86rt" Mar 08 22:23:40.848781 master-0 kubenswrapper[29458]: I0308 22:23:40.848705 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-s6bc4"] Mar 08 22:23:40.875822 master-0 kubenswrapper[29458]: I0308 22:23:40.875746 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5jbhv"] Mar 08 22:23:40.890982 master-0 kubenswrapper[29458]: I0308 22:23:40.890904 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-k9hr7" Mar 08 22:23:41.098514 master-0 kubenswrapper[29458]: I0308 22:23:41.098446 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5jbhv" event={"ID":"70cb4e01-0bf7-40f4-a0a5-21ca3478f4bc","Type":"ContainerStarted","Data":"e916c3036358082fb3450e71a36e0abf67914cbfa60aa7f204c750a4df07d2ac"} Mar 08 22:23:41.100677 master-0 kubenswrapper[29458]: I0308 22:23:41.100642 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-n7f29" event={"ID":"0732e393-4dfd-45bf-985c-b0a94d2af91c","Type":"ContainerStarted","Data":"5e20a06fc4a94af5893850e9271b3e2d52b0262af324391720586b9d4e2916ab"} Mar 08 22:23:41.108779 master-0 kubenswrapper[29458]: I0308 22:23:41.102345 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-s6bc4" event={"ID":"89c5da4c-0428-45e1-b502-41245927f5af","Type":"ContainerStarted","Data":"5dbb0eff6d0972c547a67ab9a197760d75592ff1a7753895507e32e0e5d7ad12"} Mar 08 22:23:41.119401 master-0 kubenswrapper[29458]: I0308 22:23:41.118920 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ctk5s" event={"ID":"12233273-d6c6-4698-8d94-cd602da19788","Type":"ContainerStarted","Data":"69308a35e366f983cc1178d11cb46b7862ba9faabe87950d4134679d9294c11b"} Mar 08 22:23:41.119401 master-0 kubenswrapper[29458]: I0308 22:23:41.118988 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ctk5s" event={"ID":"12233273-d6c6-4698-8d94-cd602da19788","Type":"ContainerStarted","Data":"f69ae496f7364e060ae6d2081110dc946e40de1f4c97b795991f4602d6d5da38"} Mar 08 22:23:41.119401 master-0 kubenswrapper[29458]: I0308 22:23:41.119129 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-ctk5s" Mar 08 22:23:41.126578 master-0 kubenswrapper[29458]: I0308 22:23:41.126482 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-xqsrg" event={"ID":"0d686fce-80f1-40df-997c-d0273ef978f5","Type":"ContainerStarted","Data":"1c10224121dea3114f59d2a418c0e2b4cf03dd32c511c417d67c3687901295f4"} Mar 08 22:23:41.126788 master-0 kubenswrapper[29458]: I0308 22:23:41.126701 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-86ddb6bd46-xqsrg" Mar 08 22:23:41.144196 master-0 kubenswrapper[29458]: I0308 22:23:41.142106 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-ctk5s" podStartSLOduration=4.142044838 podStartE2EDuration="4.142044838s" podCreationTimestamp="2026-03-08 22:23:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:23:41.139364959 +0000 UTC m=+590.427422561" watchObservedRunningTime="2026-03-08 22:23:41.142044838 +0000 UTC m=+590.430102430" Mar 08 22:23:41.188332 master-0 kubenswrapper[29458]: I0308 22:23:41.188202 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-86ddb6bd46-xqsrg" podStartSLOduration=3.032325174 podStartE2EDuration="4.184509658s" podCreationTimestamp="2026-03-08 22:23:37 +0000 UTC" firstStartedPulling="2026-03-08 22:23:38.864972949 +0000 UTC m=+588.153030541" lastFinishedPulling="2026-03-08 22:23:40.017157433 +0000 UTC m=+589.305215025" observedRunningTime="2026-03-08 22:23:41.168252531 +0000 
UTC m=+590.456310133" watchObservedRunningTime="2026-03-08 22:23:41.184509658 +0000 UTC m=+590.472567270" Mar 08 22:23:41.200386 master-0 kubenswrapper[29458]: I0308 22:23:41.200275 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-65dd85765-t86rt"] Mar 08 22:23:41.200588 master-0 kubenswrapper[29458]: W0308 22:23:41.200530 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83c10497_e6e1_42b9_9537_493c1e43f8a3.slice/crio-e03ef28262b85c8df82897826f7e1f2433340115102774f8eb0bd9d74c0a3bee WatchSource:0}: Error finding container e03ef28262b85c8df82897826f7e1f2433340115102774f8eb0bd9d74c0a3bee: Status 404 returned error can't find the container with id e03ef28262b85c8df82897826f7e1f2433340115102774f8eb0bd9d74c0a3bee Mar 08 22:23:41.386892 master-0 kubenswrapper[29458]: I0308 22:23:41.386842 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-k9hr7"] Mar 08 22:23:42.136652 master-0 kubenswrapper[29458]: I0308 22:23:42.136572 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-k9hr7" event={"ID":"5c7fca44-ee48-4a08-82ab-126dc4a4e7e5","Type":"ContainerStarted","Data":"49a729adc44ee3352ea0fcc792d97acdc20f3a023021097048fa0aad78da8c8e"} Mar 08 22:23:42.139539 master-0 kubenswrapper[29458]: I0308 22:23:42.139435 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65dd85765-t86rt" event={"ID":"83c10497-e6e1-42b9-9537-493c1e43f8a3","Type":"ContainerStarted","Data":"935b0040bda781acb7f844ff5226ec4b2e5a25f8b4cadda184bc9978b27a1417"} Mar 08 22:23:42.139539 master-0 kubenswrapper[29458]: I0308 22:23:42.139525 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65dd85765-t86rt" event={"ID":"83c10497-e6e1-42b9-9537-493c1e43f8a3","Type":"ContainerStarted","Data":"e03ef28262b85c8df82897826f7e1f2433340115102774f8eb0bd9d74c0a3bee"} Mar 08 22:23:42.177533 master-0 kubenswrapper[29458]: I0308 22:23:42.177170 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-65dd85765-t86rt" podStartSLOduration=2.176776577 podStartE2EDuration="2.176776577s" podCreationTimestamp="2026-03-08 22:23:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:23:42.168468073 +0000 UTC m=+591.456525665" watchObservedRunningTime="2026-03-08 22:23:42.176776577 +0000 UTC m=+591.464834169" Mar 08 22:23:48.217114 master-0 kubenswrapper[29458]: I0308 22:23:48.217005 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-n7f29" event={"ID":"0732e393-4dfd-45bf-985c-b0a94d2af91c","Type":"ContainerStarted","Data":"db6f4b05fa81cd43fb45140def396ca1095ae20f109fb34b0be39d5d5601f094"} Mar 08 22:23:48.218695 master-0 kubenswrapper[29458]: I0308 22:23:48.217930 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-n7f29" Mar 08 22:23:48.222939 master-0 kubenswrapper[29458]: I0308 22:23:48.222864 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-s6bc4" event={"ID":"89c5da4c-0428-45e1-b502-41245927f5af","Type":"ContainerStarted","Data":"2fae8c080ca142476b5bd7a1831f29d3218139690bd7f3358a639fe72914bea5"} Mar 08 22:23:48.222939 master-0 kubenswrapper[29458]: I0308 
Mar 08 22:23:48.227641 master-0 kubenswrapper[29458]: I0308 22:23:48.227056 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-k9hr7" event={"ID":"5c7fca44-ee48-4a08-82ab-126dc4a4e7e5","Type":"ContainerStarted","Data":"4f592b457956b80dc7aecb31140bcd0469b59263cb6d6f151dd0fffa607007af"}
Mar 08 22:23:48.227641 master-0 kubenswrapper[29458]: I0308 22:23:48.227227 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-786f45cff4-k9hr7"
Mar 08 22:23:48.229405 master-0 kubenswrapper[29458]: I0308 22:23:48.229310 29458 generic.go:334] "Generic (PLEG): container finished" podID="d696add8-6964-4b4a-b01e-a64f641cc597" containerID="2dbaac1622cfa04b31fd12b2beda08a55cd0d067a080beac1a16bacc81a479ee" exitCode=0
Mar 08 22:23:48.229614 master-0 kubenswrapper[29458]: I0308 22:23:48.229396 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4sw9q" event={"ID":"d696add8-6964-4b4a-b01e-a64f641cc597","Type":"ContainerDied","Data":"2dbaac1622cfa04b31fd12b2beda08a55cd0d067a080beac1a16bacc81a479ee"}
Mar 08 22:23:48.233587 master-0 kubenswrapper[29458]: I0308 22:23:48.233543 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-l8xtg" event={"ID":"24429483-3579-4fd0-878d-7e7db6af4f65","Type":"ContainerStarted","Data":"152b4e052288ffbfc88e64318257497e2859a61185df6f4d7e14914c58c386e6"}
Mar 08 22:23:48.236195 master-0 kubenswrapper[29458]: I0308 22:23:48.236164 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-l8xtg"
Mar 08 22:23:48.237482 master-0 kubenswrapper[29458]: I0308 22:23:48.237197 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5jbhv" event={"ID":"70cb4e01-0bf7-40f4-a0a5-21ca3478f4bc","Type":"ContainerStarted","Data":"946bdd1294ac4e44e52a226ec6ca9e87c702b4f7a67df118b9dd1ba0bcb0a99d"}
Mar 08 22:23:48.240970 master-0 kubenswrapper[29458]: I0308 22:23:48.240936 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-86ddb6bd46-xqsrg"
Mar 08 22:23:48.265239 master-0 kubenswrapper[29458]: I0308 22:23:48.265124 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-n7f29" podStartSLOduration=2.128798625 podStartE2EDuration="9.265093262s" podCreationTimestamp="2026-03-08 22:23:39 +0000 UTC" firstStartedPulling="2026-03-08 22:23:40.444132217 +0000 UTC m=+589.732189799" lastFinishedPulling="2026-03-08 22:23:47.580426834 +0000 UTC m=+596.868484436" observedRunningTime="2026-03-08 22:23:48.253461132 +0000 UTC m=+597.541518754" watchObservedRunningTime="2026-03-08 22:23:48.265093262 +0000 UTC m=+597.553150884"
Mar 08 22:23:48.325159 master-0 kubenswrapper[29458]: I0308 22:23:48.325043 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-l8xtg" podStartSLOduration=2.356108008 podStartE2EDuration="11.325015859s" podCreationTimestamp="2026-03-08 22:23:37 +0000 UTC" firstStartedPulling="2026-03-08 22:23:38.585205285 +0000 UTC
m=+587.873262887" lastFinishedPulling="2026-03-08 22:23:47.554113106 +0000 UTC m=+596.842170738" observedRunningTime="2026-03-08 22:23:48.315430429 +0000 UTC m=+597.603488021" watchObservedRunningTime="2026-03-08 22:23:48.325015859 +0000 UTC m=+597.613073471" Mar 08 22:23:48.421791 master-0 kubenswrapper[29458]: I0308 22:23:48.421682 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5jbhv" podStartSLOduration=1.796972218 podStartE2EDuration="8.421657432s" podCreationTimestamp="2026-03-08 22:23:40 +0000 UTC" firstStartedPulling="2026-03-08 22:23:40.922745446 +0000 UTC m=+590.210803038" lastFinishedPulling="2026-03-08 22:23:47.54743066 +0000 UTC m=+596.835488252" observedRunningTime="2026-03-08 22:23:48.41876548 +0000 UTC m=+597.706823072" watchObservedRunningTime="2026-03-08 22:23:48.421657432 +0000 UTC m=+597.709715044" Mar 08 22:23:48.509058 master-0 kubenswrapper[29458]: I0308 22:23:48.508868 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-786f45cff4-k9hr7" podStartSLOduration=3.352298467 podStartE2EDuration="9.508839099s" podCreationTimestamp="2026-03-08 22:23:39 +0000 UTC" firstStartedPulling="2026-03-08 22:23:41.39096672 +0000 UTC m=+590.679024312" lastFinishedPulling="2026-03-08 22:23:47.547507342 +0000 UTC m=+596.835564944" observedRunningTime="2026-03-08 22:23:48.459523538 +0000 UTC m=+597.747581150" watchObservedRunningTime="2026-03-08 22:23:48.508839099 +0000 UTC m=+597.796896691" Mar 08 22:23:48.511943 master-0 kubenswrapper[29458]: I0308 22:23:48.511833 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-69594cc75-s6bc4" podStartSLOduration=2.832321352 podStartE2EDuration="9.511818114s" podCreationTimestamp="2026-03-08 22:23:39 +0000 UTC" firstStartedPulling="2026-03-08 22:23:40.880969193 +0000 UTC m=+590.169026785" lastFinishedPulling="2026-03-08 22:23:47.560465955 +0000 UTC m=+596.848523547" observedRunningTime="2026-03-08 22:23:48.496289626 +0000 UTC m=+597.784347208" watchObservedRunningTime="2026-03-08 22:23:48.511818114 +0000 UTC m=+597.799875706" Mar 08 22:23:49.247814 master-0 kubenswrapper[29458]: I0308 22:23:49.247456 29458 generic.go:334] "Generic (PLEG): container finished" podID="d696add8-6964-4b4a-b01e-a64f641cc597" containerID="059c81052a2fb4b5fe9287fb895ff53a38b941128d111b5d8711a7208aaa9512" exitCode=0 Mar 08 22:23:49.248413 master-0 kubenswrapper[29458]: I0308 22:23:49.248335 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4sw9q" event={"ID":"d696add8-6964-4b4a-b01e-a64f641cc597","Type":"ContainerDied","Data":"059c81052a2fb4b5fe9287fb895ff53a38b941128d111b5d8711a7208aaa9512"} Mar 08 22:23:50.265580 master-0 kubenswrapper[29458]: I0308 22:23:50.265524 29458 generic.go:334] "Generic (PLEG): container finished" podID="d696add8-6964-4b4a-b01e-a64f641cc597" containerID="bdbf46e47883b7e8fdf7599be40df3aaf329fcf1dc4ec5406ba040fa13f132aa" exitCode=0 Mar 08 22:23:50.267202 master-0 kubenswrapper[29458]: I0308 22:23:50.267164 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4sw9q" event={"ID":"d696add8-6964-4b4a-b01e-a64f641cc597","Type":"ContainerDied","Data":"bdbf46e47883b7e8fdf7599be40df3aaf329fcf1dc4ec5406ba040fa13f132aa"} Mar 08 22:23:50.694714 master-0 kubenswrapper[29458]: I0308 22:23:50.694647 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console/console-65dd85765-t86rt" Mar 08 22:23:50.694714 master-0 kubenswrapper[29458]: I0308 22:23:50.694715 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-65dd85765-t86rt" Mar 08 22:23:50.702398 master-0 kubenswrapper[29458]: I0308 22:23:50.702345 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-65dd85765-t86rt" Mar 08 22:23:51.288116 master-0 kubenswrapper[29458]: I0308 22:23:51.287971 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4sw9q" event={"ID":"d696add8-6964-4b4a-b01e-a64f641cc597","Type":"ContainerStarted","Data":"8c9692e63bf0374897805a4fa5b9c7e6e89cf2f1ea982c1e96602023d4ead1c4"} Mar 08 22:23:51.288116 master-0 kubenswrapper[29458]: I0308 22:23:51.288051 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4sw9q" event={"ID":"d696add8-6964-4b4a-b01e-a64f641cc597","Type":"ContainerStarted","Data":"ab4933e2e663e6e0d4e476da63cb5da80e407f4ced08500865a8fa93a2f4cf50"} Mar 08 22:23:51.288116 master-0 kubenswrapper[29458]: I0308 22:23:51.288065 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4sw9q" event={"ID":"d696add8-6964-4b4a-b01e-a64f641cc597","Type":"ContainerStarted","Data":"1daa0ea59e18c63c2741168d9eb89a28494152c4b9b253a6ce48b47c3d51ec40"} Mar 08 22:23:51.288116 master-0 kubenswrapper[29458]: I0308 22:23:51.288093 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4sw9q" event={"ID":"d696add8-6964-4b4a-b01e-a64f641cc597","Type":"ContainerStarted","Data":"720413a243f96d1544dc29decacbbcc1c5275e1061fb57aa7f588d7d9e0a7cb2"} Mar 08 22:23:51.298946 master-0 kubenswrapper[29458]: I0308 22:23:51.292701 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-65dd85765-t86rt" Mar 08 22:23:51.394940 master-0 kubenswrapper[29458]: I0308 22:23:51.394851 29458 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5d5f8cb68d-7n2g4"] Mar 08 22:23:52.312705 master-0 kubenswrapper[29458]: I0308 22:23:52.312577 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4sw9q" event={"ID":"d696add8-6964-4b4a-b01e-a64f641cc597","Type":"ContainerStarted","Data":"75d97ef9326df998575e9eb6b69c5f9272e476ec97bb2228eb8363ec1590455f"} Mar 08 22:23:52.312705 master-0 kubenswrapper[29458]: I0308 22:23:52.312666 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4sw9q" event={"ID":"d696add8-6964-4b4a-b01e-a64f641cc597","Type":"ContainerStarted","Data":"d70b506a7d62f7a8f89a31659541d88c4e4bb9a24baa5f8ec6431e940ae65b3f"} Mar 08 22:23:52.314234 master-0 kubenswrapper[29458]: I0308 22:23:52.312842 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-4sw9q" Mar 08 22:23:52.418205 master-0 kubenswrapper[29458]: I0308 22:23:52.418040 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-4sw9q" podStartSLOduration=6.769830112 podStartE2EDuration="15.418003165s" podCreationTimestamp="2026-03-08 22:23:37 +0000 UTC" firstStartedPulling="2026-03-08 22:23:38.90980583 +0000 UTC m=+588.197863422" lastFinishedPulling="2026-03-08 22:23:47.557978853 +0000 UTC m=+596.846036475" observedRunningTime="2026-03-08 22:23:52.399536144 +0000 UTC m=+601.687593746" watchObservedRunningTime="2026-03-08 22:23:52.418003165 +0000 UTC m=+601.706060767" Mar 
08 22:23:53.782400 master-0 kubenswrapper[29458]: I0308 22:23:53.782314 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-4sw9q"
Mar 08 22:23:53.843749 master-0 kubenswrapper[29458]: I0308 22:23:53.843685 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-4sw9q"
Mar 08 22:23:55.414850 master-0 kubenswrapper[29458]: I0308 22:23:55.414750 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-n7f29"
Mar 08 22:23:58.114651 master-0 kubenswrapper[29458]: I0308 22:23:58.114580 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-l8xtg"
Mar 08 22:23:59.734201 master-0 kubenswrapper[29458]: I0308 22:23:59.734122 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-ctk5s"
Mar 08 22:24:00.899284 master-0 kubenswrapper[29458]: I0308 22:24:00.899220 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-786f45cff4-k9hr7"
Mar 08 22:24:06.258798 master-0 kubenswrapper[29458]: I0308 22:24:06.258670 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-f9w48"]
Mar 08 22:24:06.261263 master-0 kubenswrapper[29458]: I0308 22:24:06.261206 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-f9w48"
Mar 08 22:24:06.268859 master-0 kubenswrapper[29458]: I0308 22:24:06.264332 29458 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert"
Mar 08 22:24:06.320647 master-0 kubenswrapper[29458]: I0308 22:24:06.320571 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-f9w48"]
Mar 08 22:24:06.372214 master-0 kubenswrapper[29458]: I0308 22:24:06.372022 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/570b4a0c-c835-4d94-a62a-43dcc29d0e68-file-lock-dir\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48"
Mar 08 22:24:06.372214 master-0 kubenswrapper[29458]: I0308 22:24:06.372130 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/570b4a0c-c835-4d94-a62a-43dcc29d0e68-registration-dir\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48"
Mar 08 22:24:06.372214 master-0 kubenswrapper[29458]: I0308 22:24:06.372227 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/570b4a0c-c835-4d94-a62a-43dcc29d0e68-sys\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48"
Mar 08 22:24:06.372695 master-0 kubenswrapper[29458]: I0308 22:24:06.372267 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/570b4a0c-c835-4d94-a62a-43dcc29d0e68-node-plugin-dir\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48"
Mar 08 22:24:06.372695 master-0 kubenswrapper[29458]: I0308 22:24:06.372440 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/570b4a0c-c835-4d94-a62a-43dcc29d0e68-metrics-cert\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48"
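The probe transitions above follow the startup-probe gating rule: a container's readiness and liveness probes are held back until its startup probe reports success, which is why frr-k8s-4sw9q first flips startup unhealthy to started and only later reports readiness ready. A compact sketch of that gate (my reading of the documented Kubernetes semantics, not kubelet's prober code; all names are illustrative):

package main

import (
	"fmt"
	"time"
)

// probeLoop evaluates the startup probe until it succeeds; only after the
// container is "started" does the readiness probe begin to count.
func probeLoop(startup, readiness func() bool, period time.Duration) {
	started := false
	for i := 0; i < 5; i++ {
		if !started {
			if startup() {
				started = true
				fmt.Println(`probe="startup" status="started"`)
			} else {
				fmt.Println(`probe="startup" status="unhealthy"`)
			}
		} else if readiness() {
			fmt.Println(`probe="readiness" status="ready"`)
		}
		time.Sleep(period)
	}
}

func main() {
	tick := 0
	probeLoop(
		func() bool { tick++; return tick > 1 }, // fails once, then succeeds
		func() bool { return true },
		10*time.Millisecond,
	)
}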
Mar 08 22:24:06.372820 master-0 kubenswrapper[29458]: I0308 22:24:06.372718 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/570b4a0c-c835-4d94-a62a-43dcc29d0e68-lvmd-config\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48"
Mar 08 22:24:06.372890 master-0 kubenswrapper[29458]: I0308 22:24:06.372850 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/570b4a0c-c835-4d94-a62a-43dcc29d0e68-device-dir\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48"
Mar 08 22:24:06.375391 master-0 kubenswrapper[29458]: I0308 22:24:06.373224 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6p2f\" (UniqueName: \"kubernetes.io/projected/570b4a0c-c835-4d94-a62a-43dcc29d0e68-kube-api-access-d6p2f\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48"
Mar 08 22:24:06.375391 master-0 kubenswrapper[29458]: I0308 22:24:06.373451 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/570b4a0c-c835-4d94-a62a-43dcc29d0e68-run-udev\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48"
Mar 08 22:24:06.375391 master-0 kubenswrapper[29458]: I0308 22:24:06.373503 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/570b4a0c-c835-4d94-a62a-43dcc29d0e68-pod-volumes-dir\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48"
Mar 08 22:24:06.375391 master-0 kubenswrapper[29458]: I0308 22:24:06.373668 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/570b4a0c-c835-4d94-a62a-43dcc29d0e68-csi-plugin-dir\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48"
Mar 08 22:24:06.475363 master-0 kubenswrapper[29458]: I0308 22:24:06.475264 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/570b4a0c-c835-4d94-a62a-43dcc29d0e68-file-lock-dir\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48"
Mar 08 22:24:06.475363 master-0 kubenswrapper[29458]: I0308 22:24:06.475352 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/570b4a0c-c835-4d94-a62a-43dcc29d0e68-registration-dir\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") "
pod="openshift-storage/vg-manager-f9w48" Mar 08 22:24:06.476353 master-0 kubenswrapper[29458]: I0308 22:24:06.476270 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/570b4a0c-c835-4d94-a62a-43dcc29d0e68-sys\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48" Mar 08 22:24:06.476427 master-0 kubenswrapper[29458]: I0308 22:24:06.476388 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/570b4a0c-c835-4d94-a62a-43dcc29d0e68-registration-dir\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48" Mar 08 22:24:06.476427 master-0 kubenswrapper[29458]: I0308 22:24:06.476407 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/570b4a0c-c835-4d94-a62a-43dcc29d0e68-node-plugin-dir\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48" Mar 08 22:24:06.476506 master-0 kubenswrapper[29458]: I0308 22:24:06.476432 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/570b4a0c-c835-4d94-a62a-43dcc29d0e68-file-lock-dir\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48" Mar 08 22:24:06.476506 master-0 kubenswrapper[29458]: I0308 22:24:06.476467 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/570b4a0c-c835-4d94-a62a-43dcc29d0e68-sys\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48" Mar 08 22:24:06.476611 master-0 kubenswrapper[29458]: I0308 22:24:06.476558 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/570b4a0c-c835-4d94-a62a-43dcc29d0e68-metrics-cert\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48" Mar 08 22:24:06.476669 master-0 kubenswrapper[29458]: I0308 22:24:06.476653 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/570b4a0c-c835-4d94-a62a-43dcc29d0e68-lvmd-config\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48" Mar 08 22:24:06.476705 master-0 kubenswrapper[29458]: I0308 22:24:06.476683 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/570b4a0c-c835-4d94-a62a-43dcc29d0e68-device-dir\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48" Mar 08 22:24:06.476822 master-0 kubenswrapper[29458]: I0308 22:24:06.476783 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/570b4a0c-c835-4d94-a62a-43dcc29d0e68-node-plugin-dir\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48" Mar 08 22:24:06.476822 master-0 kubenswrapper[29458]: I0308 22:24:06.476814 29458 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-d6p2f\" (UniqueName: \"kubernetes.io/projected/570b4a0c-c835-4d94-a62a-43dcc29d0e68-kube-api-access-d6p2f\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48" Mar 08 22:24:06.476952 master-0 kubenswrapper[29458]: I0308 22:24:06.476926 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/570b4a0c-c835-4d94-a62a-43dcc29d0e68-run-udev\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48" Mar 08 22:24:06.476998 master-0 kubenswrapper[29458]: I0308 22:24:06.476959 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/570b4a0c-c835-4d94-a62a-43dcc29d0e68-lvmd-config\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48" Mar 08 22:24:06.476998 master-0 kubenswrapper[29458]: I0308 22:24:06.476979 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/570b4a0c-c835-4d94-a62a-43dcc29d0e68-pod-volumes-dir\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48" Mar 08 22:24:06.477112 master-0 kubenswrapper[29458]: I0308 22:24:06.477095 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/570b4a0c-c835-4d94-a62a-43dcc29d0e68-csi-plugin-dir\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48" Mar 08 22:24:06.477478 master-0 kubenswrapper[29458]: I0308 22:24:06.477438 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/570b4a0c-c835-4d94-a62a-43dcc29d0e68-device-dir\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48" Mar 08 22:24:06.477586 master-0 kubenswrapper[29458]: I0308 22:24:06.477560 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/570b4a0c-c835-4d94-a62a-43dcc29d0e68-csi-plugin-dir\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48" Mar 08 22:24:06.477638 master-0 kubenswrapper[29458]: I0308 22:24:06.477582 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/570b4a0c-c835-4d94-a62a-43dcc29d0e68-run-udev\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48" Mar 08 22:24:06.477638 master-0 kubenswrapper[29458]: I0308 22:24:06.477627 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/570b4a0c-c835-4d94-a62a-43dcc29d0e68-pod-volumes-dir\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48" Mar 08 22:24:06.481145 master-0 kubenswrapper[29458]: I0308 22:24:06.481104 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/570b4a0c-c835-4d94-a62a-43dcc29d0e68-metrics-cert\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48" Mar 08 22:24:06.505917 master-0 kubenswrapper[29458]: I0308 22:24:06.505626 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6p2f\" (UniqueName: \"kubernetes.io/projected/570b4a0c-c835-4d94-a62a-43dcc29d0e68-kube-api-access-d6p2f\") pod \"vg-manager-f9w48\" (UID: \"570b4a0c-c835-4d94-a62a-43dcc29d0e68\") " pod="openshift-storage/vg-manager-f9w48" Mar 08 22:24:06.629986 master-0 kubenswrapper[29458]: I0308 22:24:06.629892 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-f9w48" Mar 08 22:24:07.142412 master-0 kubenswrapper[29458]: W0308 22:24:07.142356 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod570b4a0c_c835_4d94_a62a_43dcc29d0e68.slice/crio-4dc20d894edd39fe7c3300b73a2716aa4103816e6e98d838f3d1fe0e376535d0 WatchSource:0}: Error finding container 4dc20d894edd39fe7c3300b73a2716aa4103816e6e98d838f3d1fe0e376535d0: Status 404 returned error can't find the container with id 4dc20d894edd39fe7c3300b73a2716aa4103816e6e98d838f3d1fe0e376535d0 Mar 08 22:24:07.145844 master-0 kubenswrapper[29458]: I0308 22:24:07.145805 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-f9w48"] Mar 08 22:24:07.503847 master-0 kubenswrapper[29458]: I0308 22:24:07.503646 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-f9w48" event={"ID":"570b4a0c-c835-4d94-a62a-43dcc29d0e68","Type":"ContainerStarted","Data":"3030f3418a8adf9247555f4f9f6e9f0035c5611b8a429072504268151871390b"} Mar 08 22:24:07.503847 master-0 kubenswrapper[29458]: I0308 22:24:07.503719 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-f9w48" event={"ID":"570b4a0c-c835-4d94-a62a-43dcc29d0e68","Type":"ContainerStarted","Data":"4dc20d894edd39fe7c3300b73a2716aa4103816e6e98d838f3d1fe0e376535d0"} Mar 08 22:24:07.535442 master-0 kubenswrapper[29458]: I0308 22:24:07.535223 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-f9w48" podStartSLOduration=1.535200738 podStartE2EDuration="1.535200738s" podCreationTimestamp="2026-03-08 22:24:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 22:24:07.528271065 +0000 UTC m=+616.816328657" watchObservedRunningTime="2026-03-08 22:24:07.535200738 +0000 UTC m=+616.823258330" Mar 08 22:24:08.793212 master-0 kubenswrapper[29458]: I0308 22:24:08.789434 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-4sw9q" Mar 08 22:24:09.527222 master-0 kubenswrapper[29458]: I0308 22:24:09.526733 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-f9w48_570b4a0c-c835-4d94-a62a-43dcc29d0e68/vg-manager/0.log" Mar 08 22:24:09.527222 master-0 kubenswrapper[29458]: I0308 22:24:09.526798 29458 generic.go:334] "Generic (PLEG): container finished" podID="570b4a0c-c835-4d94-a62a-43dcc29d0e68" containerID="3030f3418a8adf9247555f4f9f6e9f0035c5611b8a429072504268151871390b" exitCode=1 Mar 08 22:24:09.527222 master-0 kubenswrapper[29458]: I0308 22:24:09.526839 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-storage/vg-manager-f9w48" event={"ID":"570b4a0c-c835-4d94-a62a-43dcc29d0e68","Type":"ContainerDied","Data":"3030f3418a8adf9247555f4f9f6e9f0035c5611b8a429072504268151871390b"} Mar 08 22:24:09.533285 master-0 kubenswrapper[29458]: I0308 22:24:09.527640 29458 scope.go:117] "RemoveContainer" containerID="3030f3418a8adf9247555f4f9f6e9f0035c5611b8a429072504268151871390b" Mar 08 22:24:09.911888 master-0 kubenswrapper[29458]: I0308 22:24:09.911777 29458 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock" Mar 08 22:24:10.523901 master-0 kubenswrapper[29458]: I0308 22:24:10.523606 29458 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2026-03-08T22:24:09.911855032Z","Handler":null,"Name":""} Mar 08 22:24:10.525953 master-0 kubenswrapper[29458]: I0308 22:24:10.525895 29458 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock versions: 1.0.0 Mar 08 22:24:10.525953 master-0 kubenswrapper[29458]: I0308 22:24:10.525941 29458 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock Mar 08 22:24:10.547975 master-0 kubenswrapper[29458]: I0308 22:24:10.547892 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-f9w48_570b4a0c-c835-4d94-a62a-43dcc29d0e68/vg-manager/0.log" Mar 08 22:24:10.547975 master-0 kubenswrapper[29458]: I0308 22:24:10.547965 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-f9w48" event={"ID":"570b4a0c-c835-4d94-a62a-43dcc29d0e68","Type":"ContainerStarted","Data":"11955c59f1aca5a3e63eedef926a578de83f742b0bfba9a53cdeade3982125c7"} Mar 08 22:24:16.437345 master-0 kubenswrapper[29458]: I0308 22:24:16.437232 29458 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5d5f8cb68d-7n2g4" podUID="81607a56-08b4-4113-94bb-d6065b7809d5" containerName="console" containerID="cri-o://ba058a49db2cc8fa08b4f5d3c89f5bc1b63aab7171686ec8cdd490108fb2a5ea" gracePeriod=15 Mar 08 22:24:16.649331 master-0 kubenswrapper[29458]: I0308 22:24:16.648480 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-f9w48" Mar 08 22:24:16.653314 master-0 kubenswrapper[29458]: I0308 22:24:16.651836 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-f9w48" Mar 08 22:24:16.667529 master-0 kubenswrapper[29458]: I0308 22:24:16.667481 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d5f8cb68d-7n2g4_81607a56-08b4-4113-94bb-d6065b7809d5/console/0.log" Mar 08 22:24:16.667631 master-0 kubenswrapper[29458]: I0308 22:24:16.667541 29458 generic.go:334] "Generic (PLEG): container finished" podID="81607a56-08b4-4113-94bb-d6065b7809d5" containerID="ba058a49db2cc8fa08b4f5d3c89f5bc1b63aab7171686ec8cdd490108fb2a5ea" exitCode=2 Mar 08 22:24:16.667631 master-0 kubenswrapper[29458]: I0308 22:24:16.667580 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d5f8cb68d-7n2g4" event={"ID":"81607a56-08b4-4113-94bb-d6065b7809d5","Type":"ContainerDied","Data":"ba058a49db2cc8fa08b4f5d3c89f5bc1b63aab7171686ec8cdd490108fb2a5ea"} 
Mar 08 22:24:17.127907 master-0 kubenswrapper[29458]: I0308 22:24:17.127831 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d5f8cb68d-7n2g4_81607a56-08b4-4113-94bb-d6065b7809d5/console/0.log" Mar 08 22:24:17.128222 master-0 kubenswrapper[29458]: I0308 22:24:17.127944 29458 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d5f8cb68d-7n2g4" Mar 08 22:24:17.267710 master-0 kubenswrapper[29458]: I0308 22:24:17.267595 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/81607a56-08b4-4113-94bb-d6065b7809d5-console-config\") pod \"81607a56-08b4-4113-94bb-d6065b7809d5\" (UID: \"81607a56-08b4-4113-94bb-d6065b7809d5\") " Mar 08 22:24:17.267710 master-0 kubenswrapper[29458]: I0308 22:24:17.267696 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2msk\" (UniqueName: \"kubernetes.io/projected/81607a56-08b4-4113-94bb-d6065b7809d5-kube-api-access-j2msk\") pod \"81607a56-08b4-4113-94bb-d6065b7809d5\" (UID: \"81607a56-08b4-4113-94bb-d6065b7809d5\") " Mar 08 22:24:17.268195 master-0 kubenswrapper[29458]: I0308 22:24:17.267751 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/81607a56-08b4-4113-94bb-d6065b7809d5-console-oauth-config\") pod \"81607a56-08b4-4113-94bb-d6065b7809d5\" (UID: \"81607a56-08b4-4113-94bb-d6065b7809d5\") " Mar 08 22:24:17.268195 master-0 kubenswrapper[29458]: I0308 22:24:17.267780 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/81607a56-08b4-4113-94bb-d6065b7809d5-oauth-serving-cert\") pod \"81607a56-08b4-4113-94bb-d6065b7809d5\" (UID: \"81607a56-08b4-4113-94bb-d6065b7809d5\") " Mar 08 22:24:17.268687 master-0 kubenswrapper[29458]: I0308 22:24:17.268650 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81607a56-08b4-4113-94bb-d6065b7809d5-trusted-ca-bundle\") pod \"81607a56-08b4-4113-94bb-d6065b7809d5\" (UID: \"81607a56-08b4-4113-94bb-d6065b7809d5\") " Mar 08 22:24:17.268791 master-0 kubenswrapper[29458]: I0308 22:24:17.268760 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/81607a56-08b4-4113-94bb-d6065b7809d5-console-serving-cert\") pod \"81607a56-08b4-4113-94bb-d6065b7809d5\" (UID: \"81607a56-08b4-4113-94bb-d6065b7809d5\") " Mar 08 22:24:17.268925 master-0 kubenswrapper[29458]: I0308 22:24:17.268891 29458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/81607a56-08b4-4113-94bb-d6065b7809d5-service-ca\") pod \"81607a56-08b4-4113-94bb-d6065b7809d5\" (UID: \"81607a56-08b4-4113-94bb-d6065b7809d5\") " Mar 08 22:24:17.269064 master-0 kubenswrapper[29458]: I0308 22:24:17.268996 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81607a56-08b4-4113-94bb-d6065b7809d5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "81607a56-08b4-4113-94bb-d6065b7809d5" (UID: "81607a56-08b4-4113-94bb-d6065b7809d5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:24:17.269155 master-0 kubenswrapper[29458]: I0308 22:24:17.268990 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81607a56-08b4-4113-94bb-d6065b7809d5-console-config" (OuterVolumeSpecName: "console-config") pod "81607a56-08b4-4113-94bb-d6065b7809d5" (UID: "81607a56-08b4-4113-94bb-d6065b7809d5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:24:17.269209 master-0 kubenswrapper[29458]: I0308 22:24:17.269171 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81607a56-08b4-4113-94bb-d6065b7809d5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "81607a56-08b4-4113-94bb-d6065b7809d5" (UID: "81607a56-08b4-4113-94bb-d6065b7809d5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:24:17.269772 master-0 kubenswrapper[29458]: I0308 22:24:17.269696 29458 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/81607a56-08b4-4113-94bb-d6065b7809d5-console-config\") on node \"master-0\" DevicePath \"\"" Mar 08 22:24:17.269829 master-0 kubenswrapper[29458]: I0308 22:24:17.269771 29458 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/81607a56-08b4-4113-94bb-d6065b7809d5-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 08 22:24:17.269829 master-0 kubenswrapper[29458]: I0308 22:24:17.269799 29458 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81607a56-08b4-4113-94bb-d6065b7809d5-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 08 22:24:17.270172 master-0 kubenswrapper[29458]: I0308 22:24:17.269934 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81607a56-08b4-4113-94bb-d6065b7809d5-service-ca" (OuterVolumeSpecName: "service-ca") pod "81607a56-08b4-4113-94bb-d6065b7809d5" (UID: "81607a56-08b4-4113-94bb-d6065b7809d5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 22:24:17.271918 master-0 kubenswrapper[29458]: I0308 22:24:17.271832 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81607a56-08b4-4113-94bb-d6065b7809d5-kube-api-access-j2msk" (OuterVolumeSpecName: "kube-api-access-j2msk") pod "81607a56-08b4-4113-94bb-d6065b7809d5" (UID: "81607a56-08b4-4113-94bb-d6065b7809d5"). InnerVolumeSpecName "kube-api-access-j2msk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 22:24:17.272771 master-0 kubenswrapper[29458]: I0308 22:24:17.272695 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81607a56-08b4-4113-94bb-d6065b7809d5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "81607a56-08b4-4113-94bb-d6065b7809d5" (UID: "81607a56-08b4-4113-94bb-d6065b7809d5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 22:24:17.273899 master-0 kubenswrapper[29458]: I0308 22:24:17.273785 29458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81607a56-08b4-4113-94bb-d6065b7809d5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "81607a56-08b4-4113-94bb-d6065b7809d5" (UID: "81607a56-08b4-4113-94bb-d6065b7809d5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 22:24:17.372770 master-0 kubenswrapper[29458]: I0308 22:24:17.372577 29458 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2msk\" (UniqueName: \"kubernetes.io/projected/81607a56-08b4-4113-94bb-d6065b7809d5-kube-api-access-j2msk\") on node \"master-0\" DevicePath \"\"" Mar 08 22:24:17.372770 master-0 kubenswrapper[29458]: I0308 22:24:17.372640 29458 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/81607a56-08b4-4113-94bb-d6065b7809d5-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 08 22:24:17.372770 master-0 kubenswrapper[29458]: I0308 22:24:17.372659 29458 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/81607a56-08b4-4113-94bb-d6065b7809d5-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 08 22:24:17.372770 master-0 kubenswrapper[29458]: I0308 22:24:17.372684 29458 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/81607a56-08b4-4113-94bb-d6065b7809d5-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 08 22:24:17.680855 master-0 kubenswrapper[29458]: I0308 22:24:17.680691 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d5f8cb68d-7n2g4_81607a56-08b4-4113-94bb-d6065b7809d5/console/0.log" Mar 08 22:24:17.681548 master-0 kubenswrapper[29458]: I0308 22:24:17.680849 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d5f8cb68d-7n2g4" event={"ID":"81607a56-08b4-4113-94bb-d6065b7809d5","Type":"ContainerDied","Data":"77b4b754119442b06ec45e32c1dde13e65debe6d780de51e17716ebf552e1b5e"} Mar 08 22:24:17.681548 master-0 kubenswrapper[29458]: I0308 22:24:17.680897 29458 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5d5f8cb68d-7n2g4" Mar 08 22:24:17.681548 master-0 kubenswrapper[29458]: I0308 22:24:17.680914 29458 scope.go:117] "RemoveContainer" containerID="ba058a49db2cc8fa08b4f5d3c89f5bc1b63aab7171686ec8cdd490108fb2a5ea" Mar 08 22:24:17.681548 master-0 kubenswrapper[29458]: I0308 22:24:17.681435 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-f9w48" Mar 08 22:24:17.682350 master-0 kubenswrapper[29458]: I0308 22:24:17.682323 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-f9w48" Mar 08 22:24:17.794949 master-0 kubenswrapper[29458]: I0308 22:24:17.791027 29458 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5d5f8cb68d-7n2g4"] Mar 08 22:24:17.801118 master-0 kubenswrapper[29458]: I0308 22:24:17.800859 29458 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5d5f8cb68d-7n2g4"] Mar 08 22:24:18.985117 master-0 kubenswrapper[29458]: I0308 22:24:18.985032 29458 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81607a56-08b4-4113-94bb-d6065b7809d5" path="/var/lib/kubelet/pods/81607a56-08b4-4113-94bb-d6065b7809d5/volumes" Mar 08 22:24:19.771334 master-0 kubenswrapper[29458]: I0308 22:24:19.771244 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-5jstf"] Mar 08 22:24:19.771762 master-0 kubenswrapper[29458]: E0308 22:24:19.771733 29458 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81607a56-08b4-4113-94bb-d6065b7809d5" containerName="console" Mar 08 22:24:19.771762 master-0 kubenswrapper[29458]: I0308 22:24:19.771756 29458 state_mem.go:107] "Deleted CPUSet assignment" podUID="81607a56-08b4-4113-94bb-d6065b7809d5" containerName="console" Mar 08 22:24:19.772034 master-0 kubenswrapper[29458]: I0308 22:24:19.772005 29458 memory_manager.go:354] "RemoveStaleState removing state" podUID="81607a56-08b4-4113-94bb-d6065b7809d5" containerName="console" Mar 08 22:24:19.775604 master-0 kubenswrapper[29458]: I0308 22:24:19.775568 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-5jstf" Mar 08 22:24:19.787861 master-0 kubenswrapper[29458]: I0308 22:24:19.787815 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 08 22:24:19.788158 master-0 kubenswrapper[29458]: I0308 22:24:19.788104 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 08 22:24:19.827119 master-0 kubenswrapper[29458]: I0308 22:24:19.826261 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5jstf"] Mar 08 22:24:19.841658 master-0 kubenswrapper[29458]: I0308 22:24:19.839710 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tb66\" (UniqueName: \"kubernetes.io/projected/2416945f-6e24-45fe-988f-6e0720015b5e-kube-api-access-2tb66\") pod \"openstack-operator-index-5jstf\" (UID: \"2416945f-6e24-45fe-988f-6e0720015b5e\") " pod="openstack-operators/openstack-operator-index-5jstf" Mar 08 22:24:19.952171 master-0 kubenswrapper[29458]: I0308 22:24:19.941761 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tb66\" (UniqueName: \"kubernetes.io/projected/2416945f-6e24-45fe-988f-6e0720015b5e-kube-api-access-2tb66\") pod \"openstack-operator-index-5jstf\" (UID: \"2416945f-6e24-45fe-988f-6e0720015b5e\") " pod="openstack-operators/openstack-operator-index-5jstf" Mar 08 22:24:19.965234 master-0 kubenswrapper[29458]: I0308 22:24:19.960336 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tb66\" (UniqueName: \"kubernetes.io/projected/2416945f-6e24-45fe-988f-6e0720015b5e-kube-api-access-2tb66\") pod \"openstack-operator-index-5jstf\" (UID: \"2416945f-6e24-45fe-988f-6e0720015b5e\") " pod="openstack-operators/openstack-operator-index-5jstf" Mar 08 22:24:20.122817 master-0 kubenswrapper[29458]: I0308 22:24:20.122746 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-5jstf" Mar 08 22:24:20.624489 master-0 kubenswrapper[29458]: I0308 22:24:20.624427 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5jstf"] Mar 08 22:24:20.629347 master-0 kubenswrapper[29458]: W0308 22:24:20.629310 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2416945f_6e24_45fe_988f_6e0720015b5e.slice/crio-7bfe8caf1a20fdc6ab185951861597877b42c55d3392e26a94130a3767fb41da WatchSource:0}: Error finding container 7bfe8caf1a20fdc6ab185951861597877b42c55d3392e26a94130a3767fb41da: Status 404 returned error can't find the container with id 7bfe8caf1a20fdc6ab185951861597877b42c55d3392e26a94130a3767fb41da Mar 08 22:24:20.724431 master-0 kubenswrapper[29458]: I0308 22:24:20.724363 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5jstf" event={"ID":"2416945f-6e24-45fe-988f-6e0720015b5e","Type":"ContainerStarted","Data":"7bfe8caf1a20fdc6ab185951861597877b42c55d3392e26a94130a3767fb41da"} Mar 08 22:24:21.760176 master-0 kubenswrapper[29458]: I0308 22:24:21.759410 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5jstf" event={"ID":"2416945f-6e24-45fe-988f-6e0720015b5e","Type":"ContainerStarted","Data":"2e35a47b7aa90f056e8ee85273fc4d8e3d97759e671b9e537e0635e3d7247f4c"} Mar 08 22:24:21.796796 master-0 kubenswrapper[29458]: I0308 22:24:21.796676 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-5jstf" podStartSLOduration=2.005255064 podStartE2EDuration="2.796649808s" podCreationTimestamp="2026-03-08 22:24:19 +0000 UTC" firstStartedPulling="2026-03-08 22:24:20.630867444 +0000 UTC m=+629.918925036" lastFinishedPulling="2026-03-08 22:24:21.422262178 +0000 UTC m=+630.710319780" observedRunningTime="2026-03-08 22:24:21.788361221 +0000 UTC m=+631.076418813" watchObservedRunningTime="2026-03-08 22:24:21.796649808 +0000 UTC m=+631.084707400" Mar 08 22:24:30.123059 master-0 kubenswrapper[29458]: I0308 22:24:30.122958 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-5jstf" Mar 08 22:24:30.123900 master-0 kubenswrapper[29458]: I0308 22:24:30.123349 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-5jstf" Mar 08 22:24:30.162551 master-0 kubenswrapper[29458]: I0308 22:24:30.162472 29458 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-5jstf" Mar 08 22:24:30.913746 master-0 kubenswrapper[29458]: I0308 22:24:30.913637 29458 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-5jstf" Mar 08 22:29:31.353187 master-0 kubenswrapper[29458]: I0308 22:29:31.352308 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lb7bm/must-gather-xhnmn"] Mar 08 22:29:31.355645 master-0 kubenswrapper[29458]: I0308 22:29:31.354789 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lb7bm/must-gather-xhnmn" Mar 08 22:29:31.358611 master-0 kubenswrapper[29458]: I0308 22:29:31.358513 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-lb7bm"/"openshift-service-ca.crt" Mar 08 22:29:31.358738 master-0 kubenswrapper[29458]: I0308 22:29:31.358626 29458 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-lb7bm"/"kube-root-ca.crt" Mar 08 22:29:31.369104 master-0 kubenswrapper[29458]: I0308 22:29:31.366922 29458 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lb7bm/must-gather-kjjz8"] Mar 08 22:29:31.369104 master-0 kubenswrapper[29458]: I0308 22:29:31.369006 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lb7bm/must-gather-kjjz8" Mar 08 22:29:31.387166 master-0 kubenswrapper[29458]: I0308 22:29:31.387064 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lb7bm/must-gather-xhnmn"] Mar 08 22:29:31.406431 master-0 kubenswrapper[29458]: I0308 22:29:31.401019 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lb7bm/must-gather-kjjz8"] Mar 08 22:29:31.518164 master-0 kubenswrapper[29458]: I0308 22:29:31.491842 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2zbz\" (UniqueName: \"kubernetes.io/projected/a4e49940-316d-41ac-9ee3-4eeec804597e-kube-api-access-h2zbz\") pod \"must-gather-kjjz8\" (UID: \"a4e49940-316d-41ac-9ee3-4eeec804597e\") " pod="openshift-must-gather-lb7bm/must-gather-kjjz8" Mar 08 22:29:31.518164 master-0 kubenswrapper[29458]: I0308 22:29:31.491907 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbsrq\" (UniqueName: \"kubernetes.io/projected/cc5e5505-745d-47f4-b125-50e45808291a-kube-api-access-sbsrq\") pod \"must-gather-xhnmn\" (UID: \"cc5e5505-745d-47f4-b125-50e45808291a\") " pod="openshift-must-gather-lb7bm/must-gather-xhnmn" Mar 08 22:29:31.518164 master-0 kubenswrapper[29458]: I0308 22:29:31.491981 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cc5e5505-745d-47f4-b125-50e45808291a-must-gather-output\") pod \"must-gather-xhnmn\" (UID: \"cc5e5505-745d-47f4-b125-50e45808291a\") " pod="openshift-must-gather-lb7bm/must-gather-xhnmn" Mar 08 22:29:31.518164 master-0 kubenswrapper[29458]: I0308 22:29:31.492015 29458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a4e49940-316d-41ac-9ee3-4eeec804597e-must-gather-output\") pod \"must-gather-kjjz8\" (UID: \"a4e49940-316d-41ac-9ee3-4eeec804597e\") " pod="openshift-must-gather-lb7bm/must-gather-kjjz8" Mar 08 22:29:31.593548 master-0 kubenswrapper[29458]: I0308 22:29:31.593468 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2zbz\" (UniqueName: \"kubernetes.io/projected/a4e49940-316d-41ac-9ee3-4eeec804597e-kube-api-access-h2zbz\") pod \"must-gather-kjjz8\" (UID: \"a4e49940-316d-41ac-9ee3-4eeec804597e\") " pod="openshift-must-gather-lb7bm/must-gather-kjjz8" Mar 08 22:29:31.593548 master-0 kubenswrapper[29458]: I0308 22:29:31.593534 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbsrq\" 
(UniqueName: \"kubernetes.io/projected/cc5e5505-745d-47f4-b125-50e45808291a-kube-api-access-sbsrq\") pod \"must-gather-xhnmn\" (UID: \"cc5e5505-745d-47f4-b125-50e45808291a\") " pod="openshift-must-gather-lb7bm/must-gather-xhnmn" Mar 08 22:29:31.593860 master-0 kubenswrapper[29458]: I0308 22:29:31.593588 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cc5e5505-745d-47f4-b125-50e45808291a-must-gather-output\") pod \"must-gather-xhnmn\" (UID: \"cc5e5505-745d-47f4-b125-50e45808291a\") " pod="openshift-must-gather-lb7bm/must-gather-xhnmn" Mar 08 22:29:31.593860 master-0 kubenswrapper[29458]: I0308 22:29:31.593607 29458 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a4e49940-316d-41ac-9ee3-4eeec804597e-must-gather-output\") pod \"must-gather-kjjz8\" (UID: \"a4e49940-316d-41ac-9ee3-4eeec804597e\") " pod="openshift-must-gather-lb7bm/must-gather-kjjz8" Mar 08 22:29:31.594159 master-0 kubenswrapper[29458]: I0308 22:29:31.594126 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a4e49940-316d-41ac-9ee3-4eeec804597e-must-gather-output\") pod \"must-gather-kjjz8\" (UID: \"a4e49940-316d-41ac-9ee3-4eeec804597e\") " pod="openshift-must-gather-lb7bm/must-gather-kjjz8" Mar 08 22:29:31.594385 master-0 kubenswrapper[29458]: I0308 22:29:31.594344 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cc5e5505-745d-47f4-b125-50e45808291a-must-gather-output\") pod \"must-gather-xhnmn\" (UID: \"cc5e5505-745d-47f4-b125-50e45808291a\") " pod="openshift-must-gather-lb7bm/must-gather-xhnmn" Mar 08 22:29:31.609702 master-0 kubenswrapper[29458]: I0308 22:29:31.609580 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2zbz\" (UniqueName: \"kubernetes.io/projected/a4e49940-316d-41ac-9ee3-4eeec804597e-kube-api-access-h2zbz\") pod \"must-gather-kjjz8\" (UID: \"a4e49940-316d-41ac-9ee3-4eeec804597e\") " pod="openshift-must-gather-lb7bm/must-gather-kjjz8" Mar 08 22:29:31.614048 master-0 kubenswrapper[29458]: I0308 22:29:31.613985 29458 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbsrq\" (UniqueName: \"kubernetes.io/projected/cc5e5505-745d-47f4-b125-50e45808291a-kube-api-access-sbsrq\") pod \"must-gather-xhnmn\" (UID: \"cc5e5505-745d-47f4-b125-50e45808291a\") " pod="openshift-must-gather-lb7bm/must-gather-xhnmn" Mar 08 22:29:31.692191 master-0 kubenswrapper[29458]: I0308 22:29:31.692055 29458 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lb7bm/must-gather-xhnmn" Mar 08 22:29:31.712468 master-0 kubenswrapper[29458]: I0308 22:29:31.712367 29458 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lb7bm/must-gather-kjjz8" Mar 08 22:29:32.151529 master-0 kubenswrapper[29458]: I0308 22:29:32.151420 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lb7bm/must-gather-xhnmn"] Mar 08 22:29:32.153127 master-0 kubenswrapper[29458]: W0308 22:29:32.153092 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc5e5505_745d_47f4_b125_50e45808291a.slice/crio-75075a2e73da33166123695da0ea33b427523d0ee073a0e34aa74ba46dcba0a2 WatchSource:0}: Error finding container 75075a2e73da33166123695da0ea33b427523d0ee073a0e34aa74ba46dcba0a2: Status 404 returned error can't find the container with id 75075a2e73da33166123695da0ea33b427523d0ee073a0e34aa74ba46dcba0a2 Mar 08 22:29:32.158609 master-0 kubenswrapper[29458]: I0308 22:29:32.156183 29458 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 08 22:29:32.280514 master-0 kubenswrapper[29458]: W0308 22:29:32.280448 29458 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4e49940_316d_41ac_9ee3_4eeec804597e.slice/crio-ab6d48f8370d97464e0d044f0e79a4688709acf757cfc13b352287f51a855d3e WatchSource:0}: Error finding container ab6d48f8370d97464e0d044f0e79a4688709acf757cfc13b352287f51a855d3e: Status 404 returned error can't find the container with id ab6d48f8370d97464e0d044f0e79a4688709acf757cfc13b352287f51a855d3e Mar 08 22:29:32.289356 master-0 kubenswrapper[29458]: I0308 22:29:32.289273 29458 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lb7bm/must-gather-kjjz8"] Mar 08 22:29:32.567700 master-0 kubenswrapper[29458]: I0308 22:29:32.565820 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lb7bm/must-gather-xhnmn" event={"ID":"cc5e5505-745d-47f4-b125-50e45808291a","Type":"ContainerStarted","Data":"75075a2e73da33166123695da0ea33b427523d0ee073a0e34aa74ba46dcba0a2"} Mar 08 22:29:32.567700 master-0 kubenswrapper[29458]: I0308 22:29:32.567184 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lb7bm/must-gather-kjjz8" event={"ID":"a4e49940-316d-41ac-9ee3-4eeec804597e","Type":"ContainerStarted","Data":"ab6d48f8370d97464e0d044f0e79a4688709acf757cfc13b352287f51a855d3e"} Mar 08 22:29:34.592716 master-0 kubenswrapper[29458]: I0308 22:29:34.592656 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lb7bm/must-gather-kjjz8" event={"ID":"a4e49940-316d-41ac-9ee3-4eeec804597e","Type":"ContainerStarted","Data":"decc5280f64d2370f8237d8270aaae03f4955eabdfde3e27a997ef60e0bd21a3"} Mar 08 22:29:34.592716 master-0 kubenswrapper[29458]: I0308 22:29:34.592715 29458 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lb7bm/must-gather-kjjz8" event={"ID":"a4e49940-316d-41ac-9ee3-4eeec804597e","Type":"ContainerStarted","Data":"d453a45dfac0ab71ca758c8d58e7639336a27151bad772e50c2fec8a13949c58"} Mar 08 22:29:34.626146 master-0 kubenswrapper[29458]: I0308 22:29:34.615203 29458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lb7bm/must-gather-kjjz8" podStartSLOduration=2.36816934 podStartE2EDuration="3.6151813s" podCreationTimestamp="2026-03-08 22:29:31 +0000 UTC" firstStartedPulling="2026-03-08 22:29:32.283403424 +0000 UTC m=+941.571461016" lastFinishedPulling="2026-03-08 22:29:33.530415384 +0000 UTC 
m=+942.818472976" observedRunningTime="2026-03-08 22:29:34.61036833 +0000 UTC m=+943.898425922" watchObservedRunningTime="2026-03-08 22:29:34.6151813 +0000 UTC m=+943.903238892" Mar 08 22:29:35.939821 master-0 kubenswrapper[29458]: I0308 22:29:35.939714 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-8c9c967c7-ln9l2_f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9/cluster-version-operator/0.log" Mar 08 22:29:36.656710 master-0 kubenswrapper[29458]: I0308 22:29:36.656647 29458 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-8c9c967c7-ln9l2_f7f6b35a-6cf0-4256-aa4d-0a57d10ce7e9/cluster-version-operator/1.log"